WO2023151698A1 - Handprint acquisition system

Handprint acquisition system

Info

Publication number
WO2023151698A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
light
image acquisition
handprint
fusion
Prior art date
Application number
PCT/CN2023/076009
Other languages
French (fr)
Chinese (zh)
Inventor
汤林鹏
邰骋
祝素伟
陈军航
张舒畅
Original Assignee
墨奇科技(北京)有限公司
北京至简墨奇科技有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202210135632.3A external-priority patent/CN116631015A/en
Priority claimed from CN202220304780.9U external-priority patent/CN218004132U/en
Priority claimed from CN202210344908.9A external-priority patent/CN114862673A/en
Priority claimed from CN202220755751.4U external-priority patent/CN217767172U/en
Priority claimed from CN202221118200.3U external-priority patent/CN217426154U/en
Priority claimed from CN202320061548.1U external-priority patent/CN219497088U/en
Priority claimed from CN202310029644.2A external-priority patent/CN117218682A/en
Application filed by 墨奇科技(北京)有限公司 and 北京至简墨奇科技有限公司
Publication of WO2023151698A1 publication Critical patent/WO2023151698A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/13Sensors therefor

Definitions

  • the present application relates to the technical field of non-contact handprint collection, and in particular, to a handprint collection system.
  • Handprint recognition is one of many biometric identification technologies.
  • Biometric identification refers to technology that uses inherent physiological or behavioral characteristics of the human body for personal identification. Because of its convenience and security, biometric identification has broad application prospects in identity authentication and network security. Available biometric modalities include handprints (such as fingerprints), faces, voiceprints, and irises, among which the handprint is the most widely used.
  • Handprint recognition first requires the handprint to be collected.
  • However, existing collection schemes are not sufficiently compatible with different groups of people.
  • Their performance on over-dry, over-wet, and shallow handprints is limited, making it difficult to meet the collection needs of various groups of people.
  • To this end, a handprint collection system is provided, including: a lighting system comprising one or more blue light sources and one or more green light sources, each light source being used to emit light toward a handprint collection area in which the object to be photographed is placed, the object to be photographed including at least one of a single finger, two thumbs, four fingers, a flat palm, and the side of a palm; a structured light projector used to emit structured light toward the handprint collection area; one or more image acquisition devices used to collect images of the handprint collection area while the lighting system and the structured light projector emit light toward it, so as to obtain a handprint image of the subject; and a processing device used to control the lighting system and the structured light projector to emit light, to control the one or more image acquisition devices to collect the handprint image, and to process the handprint images collected by the one or more image acquisition devices.
  • The blue light source and the green light source are used to supplement the light for the image acquisition devices when collecting the handprint image. This expands the applicability of the handprint acquisition system to different skin conditions, which helps to accommodate the collection needs of various groups of people.
  • Figure 1a shows a schematic diagram of a structured light projector according to an embodiment of the present application
  • Figures 1b-1d show schematic diagrams of structured light patterns according to an embodiment of the present application
  • FIG. 2 shows a schematic diagram of a target acquisition system and related target objects and target acquisition areas according to an embodiment of the present application
  • FIG. 3 shows a schematic diagram of a mosaic image obtained by directly superimposing and stitching multiple fingerprint images together in the prior art
  • FIG. 4 shows a schematic flowchart of an image stitching method according to an embodiment of the present application
  • Fig. 5 shows a schematic diagram of reconstructing a structured light pattern according to an embodiment of the present application
  • FIG. 6 shows an example of a structured light pattern including a stitched structured light pattern according to an embodiment of the present application
  • FIGS. 7a-7c show schematic diagrams of three structured light images corresponding to fingerprint acquisition at three different angles
  • Fig. 8 shows a schematic diagram of a structured light image after nail removal according to an embodiment of the present application
  • Fig. 9 shows a simple schematic diagram of collecting structured light stripes by a camera according to an embodiment of the present application.
  • Fig. 10 shows a schematic diagram of a visualized three-dimensional point cloud model according to an embodiment of the present application
  • Figures 11a-11b show a schematic diagram of the deduplication operation before the model is expanded along the common coordinate axis according to an embodiment of the present application
  • Fig. 12 shows a schematic diagram of a three-dimensional model unfolded along a common coordinate axis according to an embodiment of the present application
  • Fig. 13 shows a schematic diagram of stitching two images
  • Fig. 14 shows a schematic diagram of a handprint collection system and related fingers and handprint collection areas according to an embodiment of the present application
  • Fig. 15 shows a schematic diagram of a part of the handprint collection system according to one embodiment of the present application.
  • Fig. 16 shows a schematic diagram of a partial structure of a handprint collection system and related fingers and handprint collection areas according to an embodiment of the present application
  • Fig. 17 shows a schematic diagram of image acquisition and light source lighting sequence according to an embodiment of the present application.
  • Fig. 18 is a perspective view of a non-contact handprint collection device according to an exemplary embodiment of the present application.
  • Fig. 19 is a front view of a non-contact handprint collection device according to an exemplary embodiment of the present application.
  • Fig. 20 is a top view of a non-contact handprint collection device according to an exemplary embodiment of the present application.
  • Fig. 21 is a side view of a non-contact handprint collection device according to an exemplary embodiment of the present application.
  • Fig. 22 is an exploded view of a non-contact handprint collection device according to an exemplary embodiment of the present application.
  • Figure 23 is an axonometric view of the non-contact handprint collection device shown in Figure 22;
  • Fig. 24 is a cross-sectional view of the non-contact handprint collection device shown in Fig. 23;
  • Fig. 25 is an enlarged view of the handprint collection part of the non-contact handprint collection device shown in Fig. 22;
  • Fig. 26 is a top view of the handprint collection part shown in Fig. 25;
  • Fig. 27 is a front view of the handprint collection part shown in Fig. 25;
  • Fig. 28 is a side view of the handprint collection part shown in Fig. 25;
  • Fig. 29 is a rear view of the handprint collection part shown in Fig. 25;
  • Fig. 30 shows a schematic flowchart of a non-contact target object handprint collection method according to an embodiment of the present application
  • Fig. 31A shows a schematic diagram of a structured light channel image according to an embodiment of the present application
  • Fig. 31B shows a schematic diagram of the arrangement of structured light repeating units in structured light according to another embodiment of the present application.
  • Fig. 32A shows a schematic diagram of a structured light channel image according to another embodiment of the present application.
  • Fig. 32B shows a schematic diagram of arrangement of structured light repeating units in structured light according to another embodiment of the present application.
  • Fig. 33 shows a schematic diagram of processing objects in an unstructured light channel image according to an embodiment of the present application
  • Figure 34 shows a schematic diagram of an unstructured light channel image according to one embodiment of the present application.
  • Fig. 35 shows a schematic block diagram of a non-contact handprint collection device according to an embodiment of the present application.
  • the handprint collection system includes an illumination system, a structured light projector, one or more image collection devices and a processing device.
  • The lighting system includes one or more blue light sources and one or more green light sources. Each light source is used to emit light toward the handprint collection area, in which the object to be photographed is placed; the object to be photographed includes at least one of a single finger, two thumbs, four fingers, a flat palm, and the side of a palm.
  • the structured light projector is used for emitting structured light to the handprint collection area.
  • One or more image acquisition devices are used to collect images of the handprint collection area while the illumination system and the structured light projector emit light to the handprint collection area to obtain a handprint image of the subject.
  • The processing device is used to control the lighting system and the structured light projector to emit light and to control the one or more image acquisition devices to collect handprint images. The processing device is also used to process the handprint images collected by the one or more image acquisition devices to obtain the structured light image and the illumination light image corresponding to each image acquisition device, and to perform fusion processing on the illumination light images and transformation processing based on the structured light images to obtain the processed handprint image. The illumination light images include a blue light image (i.e., the blue channel image below) and a green light image (i.e., the green channel image below). The images used for fusion processing include at least one blue light image and at least one green light image; they correspond to the same image acquisition device and the same shooting object, their shooting (i.e., collection) times fall within a given interval range, and their shooting lighting conditions are different. Different lighting conditions differ in at least one of the color of the light source and the lighting angle of the lighting system.
  • There may be multiple blue light sources and multiple green light sources, and they may be arranged at different angles. In this way, by switching blue light sources at different angles on and off and/or adjusting their brightness, different blue lighting conditions can be combined; similarly, different green lighting conditions can be combined.
  • the interval range may be a preset time interval range, which may be set to an appropriate size as required.
  • The interval range may be expressed as [t1, t2]; for example, t1 may be 0 seconds and t2 may be 0.1 seconds, 0.2 seconds, 0.3 seconds, or another time value close to 0.
  • the shooting time of the images used for fusion processing may be the same or very close, that is, the upper limit of the interval range may be set to a time value close to 0.
  • The processing device can perform the following operations. 1. Control the lighting system and the structured light projector to emit light and control the one or more image acquisition devices to collect handprint images; that is, the processing device can be used to control the capture.
  • The way of controlling the capture may include presetting multiple different lighting conditions for shooting, for example, shooting P1 under a first lighting condition at t1, shooting P2 under a second lighting condition at t2, and shooting P3 under a third lighting condition at t3, after which the better-quality parts of P1, P2, and P3 can be merged.
  • The way of controlling the capture may also include shooting first, and then shooting again after adjusting the lighting conditions according to the captured image.
  • The way of controlling the capture may also include determining, through a preview image acquisition device, whether the posture of the shooting object is acceptable, and starting the capture only after the check passes.
  • For example, an image may be taken using only the structured light source, with the posture of the subject judged from the structured light image; or an image may be taken using both the structured light source and the illumination light sources, with the posture of the subject judged from the structured light image and the illumination light image.
  • 2. Process the handprint images collected by the one or more image acquisition devices to obtain the structured light image and illumination light image corresponding to each image acquisition device; for example, the captured images are RGB three-channel images, in which the red channel is the structured light image and the blue and green channels are the illumination light images, as sketched below. 3. Perform subsequent processing, such as fusion processing and transformation processing, based on the illumination light images and the structured light image.
  • All of the collected images may be used for subsequent processing, or a subset of images may be selected for subsequent processing based on image quality, the pose of the subject, and the target image acquisition device (for example, an image acquisition device at a suitable height).
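  • The channel separation in operation 2 can be sketched as follows in Python/OpenCV. Treating the red channel as the structured light image and the blue and green channels as the illumination light images follows the example above; the function name and file-based input are illustrative assumptions.

```python
import cv2

def split_handprint_capture(rgb_path: str):
    """Split one captured RGB frame into a structured light image and illumination light images.

    Per the example above, the red channel carries the structured light pattern,
    while the blue and green channels are the illumination (blue / green light source) images.
    """
    bgr = cv2.imread(rgb_path)                     # OpenCV loads images in B, G, R order
    blue_img, green_img, red_img = cv2.split(bgr)
    structured_light_img = red_img
    illumination_imgs = {"blue": blue_img, "green": green_img}
    return structured_light_img, illumination_imgs
```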
  • The fusion processing of the illumination light images may include fusing images taken under the blue light source and the green light source, and optionally may also include fusing images taken under different illumination angles of the same color of light source. Images taken under the same light source at the same lighting angle but with different light intensities may not be fused.
  • For example, the images used for fusion may be B1 and G1, with BG1 obtained through fusion; the camera corresponding to the images used for fusion is camera C1.
  • As another example, the images used for fusion may be BG1 and BG2, both of which are blue-green fused images; this corresponds to fusing images of the same colors taken under different lighting angles, and the camera corresponding to the images used for fusion is again camera C1.
  • As yet another example, the images used for fusion may be: B1P, obtained by splicing B1' (the transformation of B1 taken by camera C1) with B1' (the transformation of B1 taken by camera C2); and G1P, obtained by splicing G1' (the transformation of G1 taken by camera C1) with G1' (the transformation of G1 taken by camera C2). B1P and G1P can then be fused.
  • The cameras corresponding to B1P are camera C1 and camera C2, the cameras corresponding to G1P are camera C1 and camera C2, and the cameras corresponding to B1P and G1P are therefore the same.
  • the wavelength of the blue light emitted by the blue light source is less than 430 nm, and the wavelength of the green light emitted by the green light source is greater than 540 nm.
  • Performing the fusion processing on the illumination light images and the transformation processing based on the structured light image may include: performing fusion processing on the blue light image and the green light image in at least one illumination light image captured by the same image acquisition device to obtain a fused image; and, based on the structured light image corresponding to the at least one illumination light image, transforming the fused image to obtain the processed handprint image.
  • In this solution, the blue light image and the green light image taken by the same image acquisition device are fused first, and the fused image is then transformed.
  • the transformation process may be a reconstruction transformation based on reconstructed structured light patterns such as described below.
  • Alternatively, performing the fusion processing on the illumination light images and the transformation processing based on the structured light image may include: based on the structured light image corresponding to at least one illumination light image captured by the same image acquisition device, transforming the blue light image in the at least one illumination light image to obtain a transformed blue light image; based on the same structured light image, transforming the green light image in the at least one illumination light image to obtain a transformed green light image; and fusing the transformed blue light image and the transformed green light image to obtain the processed handprint image.
  • In this solution, the blue light image and the green light image are transformed first, and the transformed images are then fused.
  • The at least one illumination light image may include a first illumination light image captured at a first moment and a second illumination light image captured at a second moment. In that case, fusing the blue light image and the green light image in the at least one illumination light image captured by the same image acquisition device to obtain the fused image includes: fusing the first blue light image and the first green light image in the first illumination light image captured by the same image acquisition device at the first moment to obtain a first blue-green fused image; fusing the second blue light image and the second green light image in the second illumination light image captured by the same image acquisition device at the second moment to obtain a second blue-green fused image; and fusing the first blue-green fused image and the second blue-green fused image to obtain the fused image.
  • This solution fuses the four images B1, G1, B2, and G2 (first B1+G1 and B2+G2, then BG1+BG2) and then transforms the fused image.
  • Alternatively, the at least one illumination light image includes a first illumination light image taken at a first moment and a second illumination light image taken at a second moment; the transformed blue light image includes a first transformed blue light image determined from the first blue light image in the first illumination light image and a second transformed blue light image determined from the second blue light image in the second illumination light image; and the transformed green light image includes a first transformed green light image determined from the first green light image in the first illumination light image and a second transformed green light image determined from the second green light image in the second illumination light image. Fusing the transformed blue light image and the transformed green light image to obtain the processed handprint image includes: fusing the first transformed blue light image and the first transformed green light image to obtain a first transformed fusion image; fusing the second transformed blue light image and the second transformed green light image to obtain a second transformed fusion image; and fusing the first transformed fusion image and the second transformed fusion image to obtain the processed handprint image.
  • In this solution, B1', B2', G1', and G2' are obtained through transformation processing, and the four images are then fused (first B1'+G1' to obtain BG1' and B2'+G2' to obtain BG2', and then BG1'+BG2').
  • In another solution, the illumination light image includes a first illumination light image taken by the same image acquisition device at a first moment and a second illumination light image taken at a second moment. Performing the fusion processing on the illumination light images and the transformation processing based on the structured light image to obtain the processed handprint image includes: fusing the first blue light image in the first illumination light image with the first green light image in the first illumination light image to obtain a first blue-green fused image; fusing the second blue light image in the second illumination light image with the second green light image in the second illumination light image to obtain a second blue-green fused image; based on the first structured light image corresponding to the first illumination light image, transforming the first blue-green fused image to obtain a first fusion-transformed image; based on the second structured light image corresponding to the second illumination light image, transforming the second blue-green fused image to obtain a second fusion-transformed image; and fusing the first fusion-transformed image and the second fusion-transformed image to obtain the processed handprint image.
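  • The processing orders described above can be summarized with the following sketch. Here fuse_bg and transform are stand-ins for the blue-green fusion and the structured-light-based reconstruction transformation described in this application; their trivial bodies only keep the sketch runnable and are not the actual algorithms.

```python
import numpy as np

def fuse_bg(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Stand-in for the image fusion described above (e.g. sharpness-based selection)."""
    return np.maximum(img_a, img_b)  # trivial placeholder

def transform(img: np.ndarray, structured: np.ndarray) -> np.ndarray:
    """Stand-in for the reconstruction transformation based on the structured light image."""
    return img  # trivial placeholder

# Fuse first, then transform: B1+G1 -> BG1, B2+G2 -> BG2, BG1+BG2 -> fused, then transform.
def fuse_then_transform(B1, G1, B2, G2, S1, S2):
    fused = fuse_bg(fuse_bg(B1, G1), fuse_bg(B2, G2))
    return transform(fused, S1)  # which structured light image drives the final transform
                                 # is an assumption here, not fixed by the text above

# Transform every channel image first, then fuse: B1', G1', B2', G2' -> BG1', BG2' -> result.
def transform_then_fuse(B1, G1, B2, G2, S1, S2):
    BG1t = fuse_bg(transform(B1, S1), transform(G1, S1))
    BG2t = fuse_bg(transform(B2, S2), transform(G2, S2))
    return fuse_bg(BG1t, BG2t)

# Fuse per moment, transform each fused image, then fuse across moments (the last variant above).
def fuse_transform_fuse(B1, G1, B2, G2, S1, S2):
    return fuse_bg(transform(fuse_bg(B1, G1), S1), transform(fuse_bg(B2, G2), S2))
```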
  • The fusion processing includes calculation fusion and/or neural network model fusion.
  • Calculation fusion includes: calculating a sharpness index for the pixel at a first position in each of the images used for fusion processing, and taking the pixel value of the pixel at the first position in the image whose sharpness index is better as the pixel value of the pixel at the first position in the fused image.
  • Neural network model fusion includes: inputting the pixels at a second position in the images used for fusion processing into a neural network model to obtain the pixel value of the pixel at the second position in the fused image.
  • For example, the sharpness indices of the pixel at the first position in the blue light image and in the green light image are calculated respectively; if the sharpness index of the pixel at the first position in the blue light image is better than that in the green light image, the pixel value of the pixel at the first position in the fused image is the pixel value of the pixel at the first position in the blue light image.
  • The sharpness index at the first position may be calculated, for example, as the standard deviation between the pixel value of the pixel at the first position and the pixel values of the pixels in a neighborhood around the first position, with this standard deviation used as the sharpness index corresponding to the pixel at the first position. The larger the standard deviation, the more drastically the pixel values change in the neighborhood around the first position, and the higher the sharpness.
  • the sharpness index may also be represented by the following quality score.
  • The pixel at the first position may be a single pixel or a plurality of pixels in an image area (for example, a 3*3 area), so that image fusion is performed at the pixel level or at the image-area level. When the pixel at the first position is a single pixel, each pixel position in the image can be taken as the first position in turn; when the pixel at the first position is a plurality of pixels in an image area, the image can be divided into multiple areas and each area taken as the first position.
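  • A minimal sketch of the calculation fusion described above, using the standard deviation of a local neighborhood as the sharpness index and selecting per pixel between a blue and a green channel image (numpy/OpenCV; the 3*3 window and the function names are illustrative choices, not fixed by the text).

```python
import cv2
import numpy as np

def local_std(img: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Standard deviation of each pixel's ksize*ksize neighborhood, used as the sharpness index."""
    img = img.astype(np.float32)
    mean = cv2.boxFilter(img, -1, (ksize, ksize))
    mean_sq = cv2.boxFilter(img * img, -1, (ksize, ksize))
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def calculation_fusion(blue: np.ndarray, green: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Per-pixel selection: keep the pixel from whichever image is locally sharper."""
    sharper_blue = local_std(blue, ksize) >= local_std(green, ksize)
    return np.where(sharper_blue, blue, green)
```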
  • All positions in the images used for fusion processing may be used as second positions, or only the positions that need to be fused may be used as second positions (for example, the positions that do not need to be fused are masked) when performing the above neural network model fusion.
  • B1, G1, B2, and G2 that need to be fused are input into the neural network to obtain a fused image.
  • In addition, pixel value optimization may be performed on the boundary regions of the fused image where adjacent pixels come from different source images, so that the transitions are smoother and more continuous.
  • For example, suppose the first position is a 3*3 area, the entire image is 9*9, and the image therefore contains nine first positions numbered 1-9. If the first first position of the fused image comes from the blue light image and the second first position comes from the green light image, then the pixels in rows 1-3, column 3 and the pixels in rows 1-3, column 4 are boundary-area pixels, and pixel value optimization can be performed on them.
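  • The text does not specify how the boundary-area pixel values are optimized. One simple possibility, shown here purely as an illustrative assumption, is to blend the seam pixels with their local neighborhood so that the transition between source images becomes smoother.

```python
import cv2
import numpy as np

def smooth_seams(fused: np.ndarray, seam_mask: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Illustrative boundary optimization: replace seam pixels with a local average.

    seam_mask is a boolean array marking the boundary-area pixels whose neighbors come from
    different source images (e.g. rows 1-3, columns 3-4 in the 9*9 example above).
    """
    blurred = cv2.blur(fused.astype(np.float32), (ksize, ksize))
    out = fused.astype(np.float32)
    out[seam_mask] = blurred[seam_mask]
    return out.astype(fused.dtype)
```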
  • Alternatively, the fusion processing includes both calculation fusion and neural network model fusion. The calculation fusion includes: calculating the sharpness index of the pixel at the first position in each image used for fusion processing, and, if the gap between the sharpness indices of the images used for fusion processing is greater than a gap threshold, taking the pixel value of the pixel at the first position in the image whose sharpness index is better as the pixel value of the pixel at the first position in the fused image. The neural network model fusion includes: inputting the pixels at the second position in the images used for fusion processing into the neural network model to obtain the pixel value of the pixel at the second position in the fused image, where the second position is a position at which the gap between the sharpness indices of the images used for fusion processing is not greater than the gap threshold.
  • Calculation fusion and neural network model fusion can thus be used in combination. For the pixels at the same position in the blue light image and the green light image: if the sharpness index of the pixel in the blue light image exceeds that of the pixel in the green light image by more than the gap threshold, the sharpness at this position in the blue light image is significantly better, so the pixel value from the blue light image is used as the pixel value at this position in the fused image; conversely, if the sharpness index of the pixel in the green light image exceeds that of the pixel in the blue light image by more than the gap threshold, the sharpness at this position in the green light image is significantly better, and the pixel value from the green light image is used. If the difference between the two is not greater than the threshold, their sharpness indices are similar.
  • In that case, the neural network model fusion method can be used to determine the pixel value of the fused image at that position. Exemplarily, still taking the 9*9 fused image as an example, the fused image contains 81 pixels. Suppose that at positions 1-40 the sharpness index of the blue light image is significantly better than that of the green light image, at positions 51-81 the sharpness index of the green light image is significantly better than that of the blue light image, and at positions 41-50 the two sharpness indices are similar. Then the pixels at positions 1-40 in the fused image come from the blue light image and the pixels at positions 51-81 come from the green light image, while, for positions 41-50, the images used for fusion with the other positions masked (for example, set to 0) are input to the neural network model fusion to obtain the 41st-50th pixels of the fused image. It is understandable that, after this combination of calculation fusion and neural network model fusion, the pixel values in the boundary areas of the fused image can also be optimized.
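  • A sketch of this combined strategy, assuming the local_std helper from the earlier sketch is in scope; the gap threshold value and the nn_fuse callable (standing in for the neural network model) are illustrative assumptions.

```python
import numpy as np

def combined_fusion(blue, green, nn_fuse, gap_threshold=5.0, ksize=3):
    """Calculation fusion where one image is clearly sharper; neural network fusion elsewhere.

    nn_fuse(blue, green, mask) is assumed to return a full-size fused image whose values
    are only used at the masked (ambiguous) "second positions".
    """
    diff = local_std(blue, ksize) - local_std(green, ksize)

    fused = np.where(diff >= 0, blue, green).astype(np.float32)  # calculation fusion
    ambiguous = np.abs(diff) <= gap_threshold                    # sharpness gap within threshold
    if ambiguous.any():
        fused[ambiguous] = nn_fuse(blue, green, ambiguous)[ambiguous]
    return fused
```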
  • the neural network model used for image fusion can adopt existing model architecture and training methods.
  • For example, the fused image obtained by the calculation method can be used as the ground truth, the difference between the fused image predicted by the neural network model and this ground truth is used to determine a loss value, and the neural network model is updated with the loss value.
  • Another example is to perform expert scoring on the fused images predicted by the neural network model, so that the neural network model can adjust network parameters to obtain better scores.
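  • A minimal sketch of the first training scheme above, with the computationally fused image serving as the pseudo ground truth; the tiny convolutional model and the L1 loss are illustrative assumptions (PyTorch), not prescribed by the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionNet(nn.Module):
    """Toy fusion network: stacks the blue and green channels and predicts the fused image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, blue: torch.Tensor, green: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([blue, green], dim=1))

def train_step(model, optimizer, blue, green, calc_fused):
    """One update: the calculation-fused image is used as the ground truth for the loss."""
    optimizer.zero_grad()
    pred = model(blue, green)
    loss = F.l1_loss(pred, calc_fused)
    loss.backward()
    optimizer.step()
    return loss.item()
```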
  • the above-mentioned one or more image acquisition devices have at least the following three setting schemes:
  • A1. Three image acquisition devices can be set up to capture images of a single finger. For example, the fingerprint images of a single finger can be collected respectively by three image acquisition devices whose optical axes are at a certain angle to each other, and the fingerprint images collected by the three devices can be spliced to simulate a rolled fingerprint.
  • A2. Six image acquisition devices can be provided, including three whose optical axes are at an angle to each other and three whose optical axes are substantially parallel, and optionally a preview image acquisition device.
  • The above six cameras and the optional preview image acquisition device can be used to collect single fingers, two thumbs, four fingers, and palm prints.
  • The fingerprint images collected by the three image acquisition devices whose optical axes are at an angle to each other can be spliced to simulate a rolled fingerprint of a single finger.
  • Among the image acquisition devices with substantially parallel optical axes, an image acquisition device at a suitable position is selected to collect fingerprint images.
  • The three image acquisition devices with substantially parallel optical axes can also be used to collect palm print images respectively, and the collected palm print images are stitched together to obtain a complete palm print.
  • A3. Multiple (for example, two) image acquisition devices with partially overlapping depth-of-field ranges can be provided, and optionally one image acquisition device for preview.
  • The above image acquisition devices with partially overlapping depth-of-field ranges and the optional preview image acquisition device can be used to capture two thumbs and four fingers. When capturing two thumbs or four fingers, the images captured by an image acquisition device at a suitable height can be selected, from among the devices with partially overlapping depth-of-field ranges, for processing.
  • When the one or more image acquisition devices include a plurality of common image acquisition devices that jointly photograph the same subject, performing the fusion processing on the illumination light images and the transformation processing based on the structured light image to obtain the processed handprint image includes: performing fusion processing on the illumination light images, performing transformation processing based on the structured light image, and splicing the transformed images corresponding to the common image acquisition devices to obtain the processed handprint image; the shooting object includes a single finger, a flat palm, or the side of a palm.
  • Images collected by multiple common image collection devices can be spliced together to obtain a complete handprint. Stitching can be achieved using various image stitching schemes described below. Exemplarily, in an application scenario where it is necessary to obtain a single-finger rolling fingerprint or obtain a palm print, image stitching can be performed.
  • the fusion, transformation processing and splicing of the images can be performed in various orders, and the splicing must be performed on the transformed images.
  • the execution order of the fusion, transformation and splicing steps is exemplified below.
  • Performing the fusion processing on the illumination light images, the transformation processing based on the structured light image, and the splicing of the transformed images corresponding to the common image acquisition devices to obtain the processed handprint image may include: for each of the plurality of common image acquisition devices, fusing the blue light image and the green light image in at least one illumination light image corresponding to that device to obtain a fused image corresponding to that device; based on the structured light image corresponding to that device, transforming the fused image to obtain a transformation-fused image corresponding to that device; and splicing the transformation-fused images corresponding to the plurality of common image acquisition devices to obtain the processed handprint image.
  • This is the scheme of blue-green fusion first, then transformation, then splicing.
  • For example, the BG1' obtained from camera C1 and the BG1' obtained from camera C2 are spliced to obtain the processed handprint image.
  • Alternatively, performing the fusion processing on the illumination light images, the transformation processing based on the structured light image, and the splicing of the transformed images corresponding to the common image acquisition devices to obtain the processed handprint image may include: for each of the plurality of common image acquisition devices, based on the structured light image corresponding to that device, transforming the blue light image in at least one illumination light image corresponding to that device to obtain a transformed blue light image corresponding to that device, and transforming the green light image in the at least one illumination light image to obtain a transformed green light image corresponding to that device; performing blue-green fusion on the transformed blue light image and the transformed green light image corresponding to that device to obtain a transformation-fused image corresponding to that device; and splicing the transformation-fused images corresponding to the plurality of common image acquisition devices to obtain the processed handprint image.
  • This is the scheme of transforming first, then fusing blue and green, and then splicing.
  • For example, blue-green fusion of the transformed images taken by camera C1 yields a BG1' for C1, a BG1' for C2 is obtained in the same way, and the BG1' from C1 and the BG1' from C2 are spliced to obtain the processed handprint image.
  • Alternatively, performing the fusion processing on the illumination light images, the transformation processing based on the structured light image, and the splicing of the transformed images corresponding to the common image acquisition devices to obtain the processed handprint image may include: for each of the plurality of common image acquisition devices, based on the structured light image corresponding to that device, transforming the blue light image in at least one illumination light image corresponding to that device to obtain a transformed blue light image corresponding to that device, and transforming the green light image in the at least one illumination light image to obtain a transformed green light image corresponding to that device; splicing the transformed blue light images corresponding to the plurality of common image acquisition devices to obtain a spliced transformed blue light image; splicing the transformed green light images corresponding to the plurality of devices to obtain a spliced transformed green light image; and fusing the spliced transformed blue light image and the spliced transformed green light image to obtain the processed handprint image.
  • This is the scheme of transforming first, then splicing, and then fusing blue and green. Compared with the previous two schemes, this scheme is preferable because the number of images to be fused is smaller and the images used for fusion have a lower resolution than the original images, which effectively reduces the computational cost of the blue-green fusion.
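  • A minimal sketch of this preferred ordering for two cameras C1 and C2; transform, stitch, and fuse_bg are placeholders for the reconstruction transformation, the structured-light-guided splicing, and the blue-green fusion described in this application.

```python
import numpy as np

def transform(img: np.ndarray, structured: np.ndarray) -> np.ndarray:
    """Placeholder for the reconstruction transformation based on the structured light image."""
    return img  # stand-in

def stitch(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Placeholder for splicing two transformed images via the spliced structured light units."""
    return np.concatenate([img_a, img_b], axis=1)  # stand-in

def process_two_cameras(B_c1, G_c1, S_c1, B_c2, G_c2, S_c2, fuse_bg):
    # 1. Transform every channel image with its own camera's structured light image.
    Bt_c1, Gt_c1 = transform(B_c1, S_c1), transform(G_c1, S_c1)
    Bt_c2, Gt_c2 = transform(B_c2, S_c2), transform(G_c2, S_c2)
    # 2. Splice per color across the two cameras.
    B_spliced = stitch(Bt_c1, Bt_c2)
    G_spliced = stitch(Gt_c1, Gt_c2)
    # 3. Fuse blue and green once, on the spliced images only.
    return fuse_bg(B_spliced, G_spliced)
```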
  • The structured light projector includes a light source, a pattern generating unit, and a converging lens. The pattern generating unit is arranged in front of the light source, and the light source is used to project the pattern on the pattern generating unit onto the projection plane so as to form a reconstructed structured light pattern and a spliced structured light pattern on the projection plane; the converging lens is arranged on the light transmission path between the pattern generating unit and the projection plane. The reconstructed structured light pattern is different from the spliced structured light pattern, and there is no border overlap between the reconstructed structured light units in the reconstructed structured light pattern and the spliced structured light units in the spliced structured light pattern.
  • The reconstructed structured light units are used to perform the transformation processing based on the structured light image, and the spliced structured light units are used to splice the transformed images corresponding to the plurality of common image acquisition devices.
  • the reconstruction transformation and stitching can both use the reconstruction structured light unit, or use different structured light units for reconstruction transformation and stitching.
  • The reconstructed structured light units and the spliced structured light units satisfy at least one of the following conditions: the reconstructed structured light units include stripes; the spliced structured light units include scattered points; the distribution density of the spliced structured light units is greater than the distribution density of the reconstructed structured light units.
  • Reconstructed structured light units with a lower distribution density can be used for the reconstruction transformation to improve reconstruction speed, while spliced structured light units with a higher distribution density can be used for fine local splicing to improve splicing accuracy.
  • The plurality of common image acquisition devices include a first common image acquisition device, a second common image acquisition device, and a third common image acquisition device. The optical axis of the first common image acquisition device is perpendicular to the plane of the handprint collection area that faces the plurality of common image acquisition devices; the optical axis of the second common image acquisition device forms a first preset included angle with the optical axis of the first common image acquisition device; and the optical axis of the third common image acquisition device forms a second preset included angle with the optical axis of the first common image acquisition device. The shooting object includes a single finger.
  • This scheme may correspond to the setup scheme A1 of the image acquisition device described above.
  • the first common image capture device, the second common image capture device and the third common image capture device can be applied to a scene where it is necessary to obtain a single-finger rolling fingerprint.
  • the first common image capture device, the second common image capture device and the third common image capture device may be respectively the first image capture device, the second image capture device and the third image capture device described below.
  • The first, second, and third common image acquisition devices may be the plurality of fifth image acquisition devices 1830 described below, or may be included in the plurality of fifth image acquisition devices 1830 described below.
  • The plurality of common image acquisition devices may include a fourth common image acquisition device, a fifth common image acquisition device, and a sixth common image acquisition device. The lenses of the fourth, fifth, and sixth common image acquisition devices are located in a predetermined plane and are respectively aimed at a plurality of first sub-regions in the handprint collection area, and any two adjacent first sub-regions overlap or adjoin each other; the shooting object includes a flat palm or the side of a palm.
  • This scheme may correspond to the arrangement scheme A2 of the above-mentioned image acquisition device.
  • the fourth common image acquisition device, the fifth common image acquisition device and the sixth common image acquisition device can be applied to scenarios where palmprints need to be acquired.
  • If the handprint collection device is only equipped with the fourth, fifth, and sixth common image acquisition devices, it can collect palm prints but cannot be used to collect single-finger simulated rolled fingerprints.
  • If the first, second, and third common image acquisition devices are also arranged, it is possible to collect both single-finger rolled fingerprints and palm prints (and fingerprints).
  • the optical axes of the fourth common image capture device, the fifth common image capture device and the sixth common image capture device may be arranged parallel to each other, and the three may be located in the same predetermined plane.
  • the image acquisition ranges (field of view) of two adjacent image acquisition devices among the fourth common image acquisition device, the fifth common image acquisition device and the sixth common image acquisition device may partially overlap or adjoin.
  • Each of the fourth, fifth, and sixth common image acquisition devices can acquire images of at least part of the handprint collection area, so that the fourth, fifth, and sixth common image acquisition devices can cooperate to acquire images of the entire handprint collection area.
  • The fourth, fifth, and sixth common image acquisition devices may be the plurality of fourth image acquisition devices 1820 described below, or may be included in the plurality of fourth image acquisition devices 1820 described below.
  • The one or more image acquisition devices may include a plurality of independent image acquisition devices that each photograph the same object, with the clearly imageable object-surface subspaces of the plurality of independent image acquisition devices partially overlapping. The processing device is also used to determine, from the plurality of independent image acquisition devices and according to the structured light image and/or illumination light image collected by a preview image acquisition device, a target image acquisition device whose clearly imageable object-surface subspace matches the position of the photographed object; the images used for fusion processing are captured by the target image acquisition device; the shooting object includes two thumbs or four fingers; and the preview image acquisition device is all of the plurality of independent image acquisition devices, or the independent image acquisition device with the largest field of view, or an image acquisition device that is different from the plurality of independent image acquisition devices and has a shorter focal length than they do.
  • Multiple (for example, 3) common image acquisition devices can respectively capture a part of fingers or palms, and the images captured by multiple common image acquisition devices can be spliced to obtain a complete handprint.
  • The application scenario of the plurality of independent image acquisition devices may be the capture of two thumbs or four fingers. It is understood that a scene of collecting two thumbs or four fingers means that the shooting object is two thumbs or four fingers, that the images of the two thumbs or four fingers need to be collected at the same time, and that the processed handprint image can be two single-finger images for the two thumbs or four single-finger images for the four fingers.
  • In this way, the position, height, and angle at which each finger is placed in the handprint collection area can be chosen more flexibly; otherwise, the placement position, height, and angle would be subject to many restrictions, which degrades the user experience. It is therefore desirable to obtain a greater depth of field / field of view through the combination of multiple image acquisition devices.
  • The image acquisition device corresponding to the object position is then selected for acquisition, which reduces the restrictions on the user's hand placement position, height, and angle.
  • The images collected by a single image acquisition device are processed subsequently (without multi-camera stitching) to obtain the final handprint image. For example, with a single finger as the camera selection unit, when the four fingers together correspond to multiple independent image acquisition devices, a larger depth of field / field of view can be obtained.
  • When the clearly imageable object-surface subspaces overlap in a direction parallel to the optical axis of the image acquisition devices (for example, longitudinally), a greater depth of field can be obtained through the multiple independent image acquisition devices.
  • When the clearly imageable object-surface subspaces overlap in a direction perpendicular to the optical axis of the image acquisition devices (for example, laterally), a larger field of view can be obtained through the multiple independent image acquisition devices.
  • When the clearly imageable object-surface subspaces overlap in a direction parallel to the optical axis of the image acquisition devices (for example, longitudinally), the "object position" can be the height of the subject.
  • When the clearly imageable object-surface subspaces overlap in a direction perpendicular to the optical axis of the image acquisition devices (for example, laterally), the "object position" can be the horizontal position of the subject.
  • Exemplarily, all of the plurality of independent image acquisition devices may capture images at the same time (in which case they are all preview image acquisition devices), the target image acquisition device for each camera selection unit is determined from the captured images, and the image captured by the target image acquisition device is used for subsequent processing. Alternatively, one of the plurality of independent image acquisition devices may be used as the preview image acquisition device; the target image acquisition device for each camera selection unit is determined from its captured image, and the image captured by the target image acquisition device is used for subsequent processing (if the target device is not the preview device, the target device re-captures the image). Alternatively, a preview image acquisition device other than the plurality of independent image acquisition devices may capture the preview image; the target image acquisition device for each camera selection unit is determined from the captured image, the target image acquisition device then captures the image, and that image is used for subsequent processing.
  • In this way, the fingerprints of two thumbs and four fingers can be collected, and single-finger rolled fingerprints and palm prints can also be collected.
  • The common image acquisition devices used when collecting palm prints can also serve as the independent image acquisition devices for collecting four fingers.
  • The plurality of image acquisition devices may include a plurality of first independent image acquisition devices. The lenses of the plurality of first independent image acquisition devices are arranged around a center line and face the handprint collection area. Each of the first independent image acquisition devices has a clearly imageable object-surface subspace within the depth-of-field range in front of and behind its best object plane; the clearly imageable object-surface subspaces of the plurality of first independent image acquisition devices partially overlap, and these subspaces together form a clearly imageable total space; the handprint collection area includes this clearly imageable total space. The best object planes of the plurality of first independent image acquisition devices have different positions in the handprint collection area; the focal lengths of their lenses are different; and/or the distances from the front ends of their lenses to the handprint collection area are different.
  • the first independent image acquisition device may be the seventh image acquisition device 2220 described below, that is, the plurality of first independent image acquisition devices may be the plurality of seventh image acquisition devices 2220 described below.
  • When the preview image acquisition device is the image acquisition device with the largest field of view among the plurality of independent image acquisition devices, it may be the image acquisition device with the shortest focal length among them. If the focal lengths of the plurality of independent image acquisition devices are the same, the preview image acquisition device may be any one, or all, of the plurality of independent image acquisition devices.
  • Determining, from the plurality of image acquisition devices, the target image acquisition device whose clearly imageable object-surface subspace matches the position of the photographed object includes: determining, from the captured image, the height of each finger included in the photographed object; and, according to the height of each finger, determining a target image acquisition device whose clearly imageable object-surface subspace matches that height.
  • For example, the height of the four fingers as a whole can be determined (in this case the camera selection unit is the whole of the four fingers), and a suitable image acquisition device is determined as the target image acquisition device according to that height.
  • Alternatively, the height of each finger can be determined (in this case the camera selection unit is a single finger), and a different target image acquisition device is determined for each finger.
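  • A minimal sketch of the height-based selection, assuming each device is described by the height range of its clearly imageable object-surface subspace (the data structure and the tie-breaking rule are illustrative assumptions).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Camera:
    name: str
    z_min: float  # lower bound of clearly imageable height range (mm)
    z_max: float  # upper bound of clearly imageable height range (mm)

def select_by_height(cameras: List[Camera], finger_height: float) -> Optional[Camera]:
    """Pick the device whose clearly imageable range contains the finger height.

    If several ranges contain it (the ranges partially overlap), prefer the one whose
    range center is closest to the finger height.
    """
    candidates = [c for c in cameras if c.z_min <= finger_height <= c.z_max]
    if not candidates:
        return None
    return min(candidates, key=lambda c: abs((c.z_min + c.z_max) / 2 - finger_height))

# Example: per-finger selection when the camera selection unit is a single finger.
cams = [Camera("C_low", 0, 40), Camera("C_mid", 30, 70), Camera("C_high", 60, 100)]
targets = {f: select_by_height(cams, h) for f, h in {"index": 35.0, "middle": 65.0}.items()}
```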
  • The plurality of image acquisition devices may include a plurality of eighth image acquisition devices; the lenses of the plurality of eighth image acquisition devices are located in a predetermined plane and are respectively aimed at the plurality of first sub-regions in the handprint collection area, and any two adjacent first sub-regions overlap or adjoin each other.
  • When the preview image acquisition device is the image acquisition device with the largest field of view among the plurality of independent image acquisition devices, and the plurality of independent image acquisition devices have the same field of view, the preview image acquisition device can be any one, or all, of them.
  • Determining, from the plurality of image acquisition devices and according to the image captured by the preview image acquisition device, the target image acquisition device whose clearly imageable object-surface subspace matches the position of the shooting object includes:
  • For example, if the little finger is captured by only one image acquisition device, that device is used as the target image acquisition device for the little finger; if the middle finger is captured by two image acquisition devices, the device in whose image the middle finger is more centered, more completely captured, or in a better posture is selected as the target image acquisition device for the middle finger.
  • Exemplarily, the horizontal position of the four fingers as a whole can be determined, and a suitable image acquisition device is determined as the target image acquisition device according to that horizontal position.
  • Alternatively, the horizontal position of each finger can be determined, and a different target image acquisition device is determined for each finger.
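  • A sketch of this lateral selection, scoring each candidate device by how centered and how completely a finger appears in its preview image; the bounding-box representation and the scoring weights are illustrative assumptions.

```python
from typing import Dict, Optional, Tuple

# Per device: (finger bounding-box center x in [0, 1], visible fraction of the finger in [0, 1]).
Detections = Dict[str, Tuple[float, float]]

def select_by_lateral_position(detections: Detections) -> Optional[str]:
    """Prefer the device in whose image the finger is most centered and most completely visible."""
    if not detections:
        return None

    def score(item):
        center_x, visible_fraction = item[1]
        return visible_fraction - abs(center_x - 0.5)  # illustrative weighting

    return max(detections.items(), key=score)[0]

# Example: the middle finger is seen by two devices; pick the better one.
target = select_by_lateral_position({"C_left": (0.8, 0.9), "C_center": (0.5, 1.0)})
```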
  • embodiments of the present application provide a structured light projector and an image acquisition system.
  • the structured light emitted by the structured light projector can form a variety of different structured light patterns.
  • Different structured light patterns can be projected onto the surface of the target object (that is, the three-dimensional target described herein), and two-dimensional target images of the target object can be obtained under the different structured light patterns.
  • In this way, the conversion relationship between the data to be spliced (such as 2D expanded images or 3D target models) and the precise alignment relationship between the data to be spliced can be obtained, and, based on the precise alignment relationship, the spliced data (for example, an overall expanded image obtained by stitching multiple 2D expanded images, or an overall target model obtained by stitching multiple 3D target models) is obtained automatically. The system can thus obtain high-precision overall data without much human intervention.
  • The image acquisition system is not limited thereto; it can be applied to the collection of images of any type of target, including but not limited to fingerprints, palm prints, faces, etc.
  • the image collection system may be a handprint collection system for collecting fingerprints and/or palmprints and the like.
  • the "handprint collection system” described herein may also be referred to as “handprint collection device/equipment” or “non-contact handprint collection device/equipment/system", etc.
  • A structured light projector includes a light source, a pattern generating unit, and a converging lens. The pattern generating unit is arranged in front of the light source, the light source is used to project the pattern on the pattern generating unit onto the projection plane, and the converging lens is arranged on the light transmission path between the pattern generating unit and the projection plane. The structured light beam emitted by the structured light projector forms a reconstructed structured light pattern and a spliced structured light pattern on the projection plane. The reconstructed structured light pattern is different from the spliced structured light pattern, and there is no border overlap between the reconstructed structured light units in the reconstructed structured light pattern and the spliced structured light units in the spliced structured light pattern.
  • FIG. 1a shows a schematic structure diagram of a structured light projector 100 according to an embodiment of the present application.
  • the structured light projector 100 includes a light source 110 , a pattern generating unit 120 and a converging lens 130 in sequence.
  • the projection plane 140 is in front of the converging lens 130 .
  • the light source 110 can project the pattern on the pattern generating unit 120 onto the projection plane 140 through the converging lens 130 .
  • the structured light beam emitted by the structured light projector 100 finally forms a specific structured light pattern on the projection plane 140 .
  • the light source 110 may be a point light source, for example, may include a single light emitting element.
  • the light emitting element may be a light emitting diode (LED) or a laser diode that emits laser light, or the like.
  • the light emitting element may emit laser light having a wavelength ranging from about 850 nm to about 940 nm, or may emit light in the near-infrared band or the visible band.
  • the light emitting element may emit red light with a wavelength of 660 nm as structured light.
  • the wavelength of light emitted by the light emitting element is not limited to a specific wavelength.
  • the light source 110 may also include a light emitting element and other components, such as a lampshade, a lamp panel, and a laser emitter capable of emitting structured light.
  • the pattern generating unit 120 is disposed in front of the light source 110 .
  • the pattern generation unit 120 may generate a specific pattern. After the light emitted by the light source 110 irradiates the pattern generating unit 120, a part of it is blocked or has its transmission direction changed, while the other part passes through the pattern generating unit 120 at a preset angle, thereby forming a light spot with the desired pattern on the projection plane 140.
  • the condensing lens 130 is disposed on a light transmission path between the pattern generating unit 120 and the projection plane 140 .
  • the projection plane 140 can be regarded as an ideal plane, that is, a plane on which the structured light can be projected when no target object is placed. It can be understood that when a target object (such as a finger) is placed on the projection plane 140 , the structured light emitted by the structured light projector 100 is irradiated on the target object, causing a certain optical distortion in the formed structured light pattern.
  • the light passing through the pattern generating unit 120 is converged by the converging lens 130 and reaches the projection plane 140 .
  • the converging lens 130 may include various types of convex lenses, concave lenses or other forms of lenses, or a combination of various lenses.
  • the application is not limited to the converging lens 130 , as long as the converging lens 130 as a whole has a converging effect on light, and can form a clear pattern on the projection plane 140 after converging the light passing through the pattern generating unit 120 .
  • the distance between the light source 110 and the pattern generating unit 120 may be set as a first preset distance, and the distance between the pattern generating unit 120 and the converging lens 130 may be set as a second preset distance.
  • the first preset distance and the second preset distance can each be any suitable distance, and they may or may not be equal to each other.
  • the pattern generating unit 120 and the converging lens 130 can also be arranged according to a preset angle. For example, the plane where the pattern generating unit 120 is located can be placed at a preset angle relative to the optical axis of the converging lens 130 .
  • the preset angle may be any suitable angle, such as 90° or 45°.
  • the plane where the pattern generating unit 120 is located may be a plane where a longitudinal section of the pattern generating unit 120 is located. It can be understood that when the thickness of the pattern generating unit 120 is relatively thin, for example, when it is a grating, the pattern generating unit 120 is basically on the same plane as its longitudinal section.
  • the present application does not limit the first preset distance, the second preset distance and the preset angle here, as long as the configured structured light projector 100 can form a specific structured light pattern on the projection plane 140 .
  • the structured light pattern formed by the structured light projector 100 on the projection plane 140 includes a reconstructed structured light pattern and a spliced structured light pattern.
  • FIGs. 1b-1d show schematic diagrams of structured light patterns according to embodiments of the present application.
  • FIG. 1b can be regarded as the structured light pattern formed on the projection plane 140 by the structured light beam emitted by the structured light projector 100 in FIG. 1a; this structured light pattern is composed of a reconstructed structured light pattern and a stitched structured light pattern.
  • FIG. 1c shows a reconstructed structured light pattern 150 corresponding to the structured light pattern in FIG. 1b
  • FIG. 1d shows a mosaic structured light pattern 160 corresponding to the structured light pattern in FIG. 1b .
  • the reconstructed structured light pattern is different from the spliced structured light pattern.
  • the difference between the reconstructed structured light pattern and the spliced structured light pattern may include that the two patterns are different in shape.
  • the reconstructed structured light pattern 150 shown in FIG. 1c is different from the mosaic structured light pattern 160 shown in FIG. 1d.
  • the reconstructed structured light pattern 150 is composed of multiple reconstructed structured light units 151 with similar shapes, while the stitched structured light pattern 160 is composed of multiple stitched structured light units 161 with different shapes from the reconstructed structured light units 151 .
  • the reconstructed structured light unit 151 or the spliced structured light unit 161 shown in FIGS. 1c-1d is only one of many.
  • the reconstructed structured light unit may have one form or multiple forms.
  • the form of the mosaic structured light unit may be one or multiple.
  • the difference between the reconstructed structured light pattern and the spliced structured light pattern may further include different uses of the two.
  • the reconstructed structured light pattern can be used to assist in determining the coordinates of the target object in three-dimensional space and the coordinates in the collected two-dimensional target image
  • the spliced structured light pattern can be used for the alignment between multiple data to be spliced (which can be two-dimensional unfolded images or three-dimensional target models) for the same target object to assist in the precise splicing of the data to be spliced, and then Acquire high-precision overall data.
  • the reconstructed structured light unit 152 in the form of stripes, the square stitched structured light unit 162 , and the circular stitched structured light unit 163 together constitute the structured light pattern in the figure.
  • the boundary of each stripe-form reconstructed structured light unit can be regarded as the upper and lower sides of a rectangle, and neither the square stitched structured light unit 162 nor the circular stitched structured light unit 163 intersects the upper or lower edges of the stripe-form reconstructed structured light unit 152.
  • the structured light emitted by the light source is projected on the surface of the target object through the pattern generating unit and the converging lens to form different reconstructed image patterns and stitched image patterns.
  • a specific reconstructed image pattern can be used to determine the transformation relationship between the 2D target image of the target object and the corresponding data to be stitched, and the stitched image pattern can be used to achieve alignment between the data to be stitched, so a variety of different structured light patterns help stitching to obtain more accurate overall data.
  • the pattern generating unit 120 includes a diffractive optical element or a grating.
  • the pattern generation unit 120 may include any existing or future elements or element groups capable of generating specific structured light patterns.
  • the pattern generating unit may be a diffractive optical element (Diffractive Optical Element, DOE).
  • DOE usually uses a micro-nano etching process to form two-dimensionally distributed diffraction units.
  • Each diffraction unit can have a specific shape, refractive index, etc., so it can fine-tune the phase distribution of structured light such as laser light.
  • the laser light diffracts after passing through each diffraction unit, and interferes at a certain distance to form a specific light intensity distribution, which can form a specific structured light pattern when projected on the projection plane.
  • For example, a DOE for beam shaping (i.e., a beam shaper) may be used to shape the incident beam into a desired structured light pattern.
  • a diffractive beam splitter can also be used to generate a one-dimensional or two-dimensional array beam so as to form a regular array of scattered points on the surface of the target object.
  • the pattern generating unit 120 may be a grating, such as a diffraction grating.
  • Diffraction gratings can periodically modulate the amplitude or phase (or both) of incident light through regular structures.
  • the grating has a pattern. When the light emitted by the light source hits the grating, part of it is blocked and the other part is emitted through the grating, thus forming a light spot with a desired pattern on the surface of the target object.
  • the grating can be made of various suitable materials.
  • the grating may be made of a material with better light transmission, such as a material based on glass or plastic.
  • the grating may include a film on which the reconstructed structured light pattern and the stitched structured light pattern are formed.
  • the preset pattern may be scaled to a suitable size according to a certain ratio, and then formed on the film by means such as spraying or printing.
  • For example, if one wants to form a pattern as shown in FIG. 1b on the projection plane, this can be achieved by forming the corresponding reconstructed structured light pattern and spliced structured light pattern on the film.
  • When the grating includes a film, users can set any desired reconstructed structured light pattern and spliced structured light pattern according to their needs, so personalized requirements can be met.
  • the reconstructed structured light units and the spliced structured light units can be set in multiple forms.
  • the reconstructed structured light unit includes stripes; and/or, the spliced structured light unit includes scattered points.
  • the stripes may be horizontal stripes, vertical stripes, or oblique stripes.
  • the scatter points may include various forms, such as circular scatter points, square scatter points, triangular scatter points, cross-shaped scatter points, and the like.
  • the same spliced structured light pattern may also include a combination of multiple forms of spliced structured light units, such as a combination of square scatter points and cross-shaped scatter points.
  • the spliced structured light unit shown in FIG. 1 b includes square scatter points and circular scatter points.
  • each reconstructed structured light unit may be arranged according to a fixed rule.
  • stripes of equal width may be arranged at equal intervals, or stripes with different widths may be arranged at different intervals.
  • the reconstructed structured light pattern shown in FIG. 1b is a combination of stripes, and the width of each stripe is not completely the same, and the distance between two adjacent stripes is also not completely the same.
  • the spliced structured light units in the spliced structured light pattern can also be arranged according to a fixed rule.
  • the shape and arrangement of the reconstructed structured light units in the reconstructed structured light pattern and the spliced structured light units in the spliced structured light pattern are determined by the pattern generation unit 120 .
  • For example, when the pattern generation unit 120 is a DOE, the position and angle of the DOE in the light propagation path, as well as the microstructure of the DOE surface, can affect the reconstructed structured light pattern and the stitched structured light pattern projected on the projection plane.
  • the stripes have a certain width and length, which is convenient for identification and positioning, and it is also convenient to determine the transformation relationship between the two-dimensional target image of the target object and the corresponding data to be spliced based on it.
  • the distribution of scattered points is relatively dense and the number is large, which is convenient for matching, and then facilitates the alignment of the data to be spliced.
  • the distribution density of the spliced structured light units is greater than the distribution density of the reconstructed structured light units.
  • the distribution density of the reconstructed structured light unit may be represented by the distribution density of the marking part on the reconstructed structured light unit (the marking part is at least a part of the reconstructed structured light unit). For example, in the case where the reconstructed structured light unit is a fringe, the distribution density of the midline of each fringe may be used to represent the distribution density of the fringes.
  • the distribution density of the spliced structured light unit may be represented by the distribution density of the marking portion on the spliced structured light unit (the marking portion is at least a part of the spliced structured light unit). For example, when the spliced structured light unit is a scatter point, the distribution density of the central points on each scatter point (which usually has a certain area) may be used to represent the distribution density of the scatter points.
  • the distribution density of the spliced structured light units should not be too small, otherwise sufficient splicing accuracy cannot be obtained; nor should it be too large, because if the density is so high that the spacing between units is smaller than the deformation error, wrong matches easily occur. For example, referring to FIG. 1b, the distribution density of the scattered points is greater than that of the stripes.
  • an image acquisition system includes a structured light projector, an illumination system, one or more image acquisition devices and a processing device.
  • the lighting system includes one or more light sources, each light source is used to emit light toward a target collection area, and the target collection area is used to place a three-dimensional target.
  • One or more image acquisition devices are used to acquire images of the target acquisition area while the illumination system emits light toward the target acquisition area, so as to obtain target images.
  • the processing device is used for processing the target image collected by one or more image collecting devices.
  • the three-dimensional target may be, for example, the whole human body or a part of the human body, such as human face, fingers, palms, etc., or other objects with three-dimensional structures that need to be collected.
  • the image acquisition system may be an image acquisition system for any suitable target object with a three-dimensional shape, such as a human body, a human body part, a three-dimensional still life or an animal, etc., which is not limited here.
  • the target collection area is a handprint collection area
  • the three-dimensional target is part or all of the user's hand. For the sake of simplification, the following uses a handprint collection system for collecting fingerprints of fingers as an example for illustration.
  • FIG. 2 shows a schematic diagram of an image acquisition system 200 and related target objects 400 and target acquisition areas 300 according to an embodiment of the present application. It can be understood that FIG. 2 is a main view effect diagram for the image acquisition system.
  • the image collection system 200 shown in FIG. 2 is a handprint collection system 200
  • the target object 400 is a finger 400
  • the target collection area 300 is a handprint collection area 300 .
  • the structure of the handprint collection system 200 shown in FIG. 2 is only an example and not a limitation to the application, and the handprint collection system 200 according to the embodiment of the application is not limited to the situation shown in FIG. 2 .
  • In the handprint collection system 200 shown in FIG. 2, there is one structured light projector 100, three image collection devices, and six light sources in the lighting system. It should be understood that the above numbers are only examples; any suitable number may be used, which is not limited in this embodiment of the present application.
  • the optical axes of the three image acquisition devices 212 , 214 , and 216 shown in FIG. 2 are located in the same plane, and they are respectively facing the target acquisition area 300 from three angles.
  • the optical axis of the image acquisition device 214 located in the middle basically coincides with the central axis 302 of the target acquisition area 300, while the optical axes of the image acquisition devices 212 and 216 on both sides form a certain included angle with the central axis 302 of the target acquisition area 300.
  • the relative distances and included angles between the three image acquisition devices 212 , 214 , 216 , and between each of them and the target acquisition area 300 can be set as required, and there is no limitation here.
  • the structured light projector emits structured light for coding.
  • the structured light projector 100 shown in FIG. 2 and the image acquisition device 214 in the middle may be located on different planes to avoid mutual interference.
  • the structured light projector 100 may also be located on the same plane as the image acquisition device 214 .
  • the structured light projector 100 can be arranged at any suitable position as required, as long as it does not hinder the imaging of the image acquisition device and can illuminate the handprint acquisition area.
  • the structured light emitted by the structured light projector 100 is projected on the surface of the target object to form a reconstructed image pattern corresponding to the reconstructed structured light pattern and a stitched image pattern corresponding to the stitched structured light pattern.
  • the reconstructed image pattern can be used to determine the transformation relationship between the two-dimensional target image of the target object and the corresponding data to be stitched
  • the stitched image pattern can be used to achieve alignment between the data to be stitched. Therefore, by using the image acquisition system including the structured light projector 100 , alignment and registration between the data to be spliced can be realized, which is helpful for splicing to obtain overall data with higher precision.
  • FIG. 3 shows a schematic diagram of a mosaic image obtained by directly superimposing and stitching multiple fingerprint images together in the prior art. Referring to Figure 3, it can be seen that there are obvious seams at the image splicing. Such images will directly affect subsequent fingerprint identification and reduce the accuracy of identification.
  • an embodiment of the present application provides an image stitching method.
  • the three-dimensional object is irradiated with illumination light to obtain information such as the texture of the three-dimensional object (such as fingerprint information), and at the same time, the three-dimensional object is illuminated with structured light to project a structured light pattern on it.
  • the image pattern formed after projecting the structured light pattern can assist in obtaining the data to be spliced and aligning the data to be spliced, so as to splice the data to be spliced more accurately.
  • the image mosaic method according to the embodiment of the present application can be applied to the stitching of images acquired for any type of target.
  • the image stitching method according to the embodiment of the present application may be executed by the processing device 240 in the image acquisition system 200 .
  • Fig. 4 shows a schematic flowchart of an image mosaic method 400 according to an embodiment of the present application.
  • the image mosaic method 400 includes steps S410 , S420 , S430 , S440 and S450 .
  • Step S410: acquire a third target image and a fourth target image, wherein the target images include the third target image and the fourth target image, and the third target image and the fourth target image are collected by acquiring images of the target collection area while collected light is emitted toward the target collection area; the collected light includes structured light and illumination light, and the beam of the structured light can form a reconstructed structured light pattern and a spliced structured light pattern; the acquisition ranges corresponding to the third target image and the fourth target image partially overlap.
  • the target collection area may be a physical space area with any suitable shape, such as a spherical area, a cubic area, or any other regular or irregular area.
  • lighting and image acquisition can be performed on the target acquisition area through the above-mentioned image acquisition system.
  • the image acquisition system may include one or more image acquisition devices, configured to perform image acquisition on the target acquisition area.
  • the imaging ranges of all of the one or more image acquisition devices can together cover the entire target acquisition area, so as to facilitate acquiring images of the entire target acquisition area.
  • the image acquisition device can acquire an image of the target, such as a fingerprint or palmprint image of the user.
  • an image acquisition system may include a single image acquisition device.
  • the target images at multiple angles may be collected by rotating and/or moving the image collection device.
  • the image capture system may include multiple image capture devices.
  • multiple image capture devices may be used to capture target images from multiple angles. That is, the optical axes of any two image capture devices among the plurality of image capture devices may form a preset angle with each other, and the preset angle is not equal to 0, so as to collect target images at different angles. Therefore, regardless of using single or multiple image acquisition devices, corresponding target images can be acquired for different parts (ie, different angles) of the three-dimensional target.
  • the third target image and the fourth target image may be images acquired by the above-mentioned one or more image capture devices for image capture of different parts of the three-dimensional target.
  • the third target image and the fourth target image are partially overlapped corresponding to the acquisition range on the target acquisition area.
  • the third target image and the fourth target image may be images captured by two adjacent image capture devices.
  • the image acquisition system may include a preset number of cameras arranged in preset positions.
  • the preset number is 3, and the image acquisition system includes a first camera (hereinafter referred to as C1), a second camera (hereinafter referred to as C2), and a third camera (hereinafter referred to as C3).
  • C1 , C2 , and C3 may be arranged at a preset angle on the same plane, or may be arranged at a preset angle on different planes. It can be understood that in order to obtain images of different parts of the three-dimensional object from different positions and angles, the three cameras can be positioned and arranged in a corresponding manner. Those of ordinary skill in the art can easily understand the principle, and details will not be repeated here. Moreover, the images acquired by every two adjacent cameras correspond to the partial overlap of the acquisition range on the target acquisition area.
  • For C1 and C2, which are adjacent to each other, the target image I1 and the target image I2 acquired at the same time or at different times partially overlap in terms of their acquisition ranges on the target acquisition area; for C2 and C3, which are adjacent to each other, the target image I2 and the target image I3 acquired at the same time or at different times also partially overlap in terms of their acquisition ranges on the target acquisition area.
  • the "third target image” and "fourth target image” acquired in step S410 may be any two images to be spliced.
  • the third target image and the fourth target image are I1 and I2 respectively;
  • the third target image and the fourth target image are respectively I2 and I3.
  • the image acquisition system may also include a device capable of emitting collected light, such as an illumination system and a structured light projector.
  • the image collection system may be a three-dimensional body scanner; for fingerprint collection, the image collection system may be a non-contact 3D fingerprint collection device.
  • the collected light includes structured light and illumination light.
  • Structured light refers to active structure information projected onto the surface of the measured object (here, a three-dimensional target) by a structured light projector, such as laser stripes, Gray codes, sinusoidal fringes, etc.
  • Structured light projectors, such as infrared laser emitters, emit near-infrared light in specific patterns.
  • the illuminating light may be visible light meeting preset lighting conditions, so that the third target image and the fourth target image meeting preset quality requirements can be acquired by the image acquisition device.
  • the beam of structured light can form a reconstructed structured light pattern and a spliced structured light pattern, and the reconstructed structured light pattern and the spliced structured light pattern are different.
  • Projecting the reconstructed structured light pattern on the surface of the 3D target can be used to reconstruct the body surface shape of the 3D target to obtain the data to be spliced;
  • Projecting the spliced structured light pattern on the surface of the 3D target can be used for alignment and splicing synthesis of the data to be spliced corresponding to multiple target images of the 3D target.
  • When the reconstructed structured light pattern and the spliced structured light pattern are projected onto the 3D target, some optical distortion usually occurs; in the structured light image corresponding to the collected target image, the image patterns respectively corresponding to the reconstructed structured light pattern and the spliced structured light pattern can be found.
  • the reconstructed structured light pattern may be a fringe pattern formed by structured light projected by a structured light projector in the handprint collection system.
  • the stripe pattern may include stripes having a preset width.
  • the width of each stripe can be completely the same, such as 0.1 mm wide; or not completely the same, for example, the stripes including three kinds of widths are evenly spaced in sequence.
  • Stripe patterns can also include bright and dark stripes.
  • the reconstructed structured light pattern is composed of a plurality of reconstructed structured light units, and for an example where the reconstructed structured light pattern is a stripe pattern, the reconstructed structured light unit is each stripe.
  • Fig. 5 shows an example of reconstructing a structured light pattern according to an embodiment of the present application.
  • the reconstructed structured light pattern consists of 15 stripes arranged in a certain order, and each stripe is a reconstructed structured light unit, such as the first dark stripe 510, the first bright stripe 520 and the second dark stripe 530 .
  • the width of the first bright stripe 520 is smaller than the width of the first dark stripe 510
  • the width of the first dark stripe 510 is larger than the width of the second dark stripe 530 .
  • the mosaic structured light pattern may be a scatter (or speckle) pattern formed by structured light projected by a structured light projector in an image acquisition system
  • the speckle pattern includes several speckles, and each speckle may It is a spliced structured light unit.
  • the form of the structured light unit has been described above, and will not be repeated here. Since taking pictures is a projection process, speckle will have unpredictable deformation on the picture. Therefore, some special speckle patterns can be used to ensure that the speckle feature points recognized by multiple cameras will not have errors. For example, setting the speckle in the shape of a cross, and using the cross points as feature points for recognition can help increase the accuracy of speckle recognition.
  • Fig. 6 shows an example of a structured light pattern including a mosaic structured light pattern according to an embodiment of the present application.
  • the structured light pattern shown in FIG. 6 which includes a reconstructed structured light pattern composed of stripes, and a spliced structured light pattern composed of square speckles and circular speckles.
  • both the square speckle 610 and the circular speckle 620 can be regarded as a spliced structured light unit. It can be understood that there are 18 reconstructed structured light units and 488 spliced structured light units included in FIG. 6 .
  • the reconstructed structured light unit and the spliced structured light unit may overlap within a certain range.
  • each reconstructed structured light unit may be provided with multiple spliced structured light units. For example, as shown in FIG. 6 , 25 dark speckles are set on the first bright stripe, and 26 bright speckles are set on the second dark stripe.
  • the rectangular boundary of the second dark fringe shown in FIG. 6 does not overlap with the boundary of any bright speckle within its boundary.
  • the distribution density of the spliced structured light units is greater than the distribution density of the reconstructed structured light units, so that 3D reconstruction can be performed through the reconstructed structured light units with a smaller distribution density to quickly obtain a 3D model, and then fine splicing can be performed using the spliced structured light units with a larger distribution density to ensure splicing accuracy.
  • the distribution density of the spliced structured light units should not be too small, otherwise sufficient splicing accuracy cannot be obtained; nor should it be too large, because if the density is so high that the spacing between units is smaller than the deformation error, wrong matches easily occur.
  • the distribution density of speckles is greater than that of stripes.
  • Step S420: determine the first structured light image and the first illumination light image corresponding to the third target image, and determine the second structured light image and the second illumination light image corresponding to the fourth target image.
  • the first structured light image includes the first reconstructed image pattern corresponding to the reconstructed structured light pattern and the first mosaic image pattern corresponding to the stitched structured light pattern;
  • the second structured light image includes the second reconstructed image pattern corresponding to the reconstructed structured light pattern and the second mosaic image pattern corresponding to the stitched structured light pattern.
  • both the third target image and the fourth target image may include the following image information: structured light image information obtained while the structured light projector emits structured light toward the three-dimensional target, and illumination light image information obtained while the lighting system emits illumination light toward the three-dimensional target.
  • the structured light image information corresponding to the third target image is represented by the first structured light image
  • the illumination light image information corresponding to the third target image is represented by the first illumination light image
  • the second structured light image represents the structured light image information corresponding to the fourth target image
  • the second illuminating light image represents the illuminating light image information corresponding to the fourth target image.
  • If the color of the structured light corresponds to a single color channel (for example, red structured light), the image information of this channel can be directly extracted from any target image to obtain the corresponding structured light image. If the color of the structured light contains multiple color channels, for example yellow structured light, the image information of multiple color channels can be extracted from any target image and fused to obtain the corresponding structured light image.
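  • As an illustrative sketch only (not part of the claimed method), extracting a single color channel as the structured light image could look like the following, assuming OpenCV-style BGR images and a red structured light source.

```python
import cv2
import numpy as np

def extract_structured_light_image(target_image_bgr: np.ndarray) -> np.ndarray:
    """Return the red channel of a BGR target image as the structured light image.

    Assumes the structured light is red, so its pattern is concentrated in the
    R channel, while blue/green illumination dominates the other channels.
    """
    # OpenCV stores images as B, G, R along the last axis.
    return target_image_bgr[:, :, 2]

# Usage (hypothetical file name):
# img = cv2.imread("c1_target.png")
# structured = extract_structured_light_image(img)
```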
  • FIGs. 7a-7c there are shown schematic diagrams of three structured light images corresponding to fingerprint collection at three different angles.
  • From left to right, the three structured light images shown in FIGs. 7a-7c can be obtained by extracting the r-channel from the three target images collected by the three cameras C1, C2, and C3 in the non-contact 3D fingerprint collector.
  • The above three target images are collected by C1, C2, and C3 respectively while the structured light projector uses red light to project a structured light pattern similar to that shown in FIG. 6 onto the three-dimensional target.
  • each structured light image contains a reconstructed image pattern corresponding to the reconstructed structured light pattern (bright fringes and dark fringes) shown in FIG. 6, and a stitched image pattern corresponding to the spliced structured light pattern (square speckles and circular speckles) shown in FIG. 6.
  • step S420 may further include: for each target image, acquiring an initial structured light image; performing de-interference processing on the initial structured light image to obtain a de-interferenced structured light image. Subsequent processing is performed based on the de-interferenced structured light image in steps such as step S430. Further explanation will be given below with reference to FIG. 8 .
  • a semantic segmentation model (such as a Unet model) may be used to identify nails in the initial structured light image, and remove them as noise points.
  • Fig. 8 shows a schematic diagram of a structured light image after nail removal.
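  • As a hedged illustration only (the embodiment does not prescribe this exact implementation), masking out nail regions predicted by a segmentation model might look as follows; `nail_segmentation_model` is a hypothetical pre-trained Unet-style model returning a binary mask.

```python
import numpy as np

def remove_nail_interference(structured_light_img: np.ndarray,
                             nail_mask: np.ndarray) -> np.ndarray:
    """Zero out pixels identified as nails so they do not act as noise points.

    `nail_mask` is a binary array (1 = nail) with the same height/width as the
    structured light image, e.g. produced by a Unet-style segmentation model.
    """
    cleaned = structured_light_img.copy()
    cleaned[nail_mask.astype(bool)] = 0
    return cleaned

# Hypothetical usage:
# nail_mask = nail_segmentation_model.predict(structured_light_img)  # assumed API
# deinterfered = remove_nail_interference(structured_light_img, nail_mask)
```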
  • the illumination light may include blue light and/or green light or light of other colors.
  • If the illumination light corresponds to a single color channel, the image information under this channel can be directly extracted from the target image to obtain the corresponding illumination light image.
  • For example, if the illumination light is blue light, the image information under the blue light channel can be directly extracted to obtain a single-channel illumination light image as the corresponding illumination light image.
  • If the illumination light contains multiple color channels, the image information under each of the color channels can first be extracted from the target image to obtain the corresponding single-channel illumination light images, and then the multiple single-channel illumination light images are fused to obtain the final illumination light image.
  • step S420 may further include: for each target image, extracting the color channels of the target image to obtain at least one single-channel illumination light image corresponding to at least one color channel; and fusing the at least one single-channel illumination light image to obtain the illumination light image corresponding to the target image, wherein the at least one color channel is a color channel included in the illumination light.
  • For example, the image information of the blue light and green light channels can be extracted from the target image collected by C1 to obtain the blue-channel illumination light image and the green-channel illumination light image, which are then fused to obtain the illumination light image corresponding to the target image collected by C1.
  • Similarly, the illumination light image corresponding to the target image collected by C2 and the illumination light image corresponding to the target image collected by C3 can be obtained.
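  • A minimal sketch of this channel extraction and fusion, assuming BGR images and simple averaging as the fusion rule (the text does not prescribe a specific fusion operator):

```python
import numpy as np

def illumination_image_from_channels(target_bgr: np.ndarray) -> np.ndarray:
    """Fuse the blue and green channels into one illumination light image.

    Assumes the illumination light consists of blue and green light, so the
    fingerprint ridge texture is carried by the B and G channels.
    """
    blue = target_bgr[:, :, 0].astype(np.float32)
    green = target_bgr[:, :, 1].astype(np.float32)
    fused = 0.5 * blue + 0.5 * green  # simple average as an example fusion
    return fused.astype(np.uint8)
```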
  • Step S430: determine the first transformation relationship based on the reconstruction constituent units contained in the first reconstructed image pattern, and transform the first illumination light image according to the first transformation relationship to obtain the first data to be spliced; determine the second transformation relationship based on the reconstruction constituent units contained in the second reconstructed image pattern, and transform the second illumination light image according to the second transformation relationship to obtain the second data to be spliced; wherein the first transformation relationship and the second transformation relationship are expansion transformation relationships, the first data to be spliced is the first expanded image, and the second data to be spliced is the second expanded image; or, the transformation corresponding to the first transformation relationship and the second transformation relationship is three-dimensional reconstruction, the first data to be spliced is the first target model, and the second data to be spliced is the second target model.
  • optical distortion may occur after the reconstructed structured light pattern formed by the structured light, such as the fringe pattern, is projected on the surface of the three-dimensional object.
  • For example, even if the reconstructed structured light pattern formed by the structured light emitted by the structured light projector consists of straight stripes, it can appear as arc-like stripes, as shown in FIGs. 7a-7c, when projected on the pulp of a finger.
  • the position and shape of each reconstructed component unit (such as stripes) in the reconstructed image pattern can be analyzed based on the optical principle and the inherent parameters and relative positional relationship of the structured light projector and the image acquisition device, and the The illumination light images corresponding to each target image are respectively transformed to obtain the transformed data to be spliced.
  • the first transformation relationship can be determined based on the stripes contained in Figure 7a, and the corresponding illumination light image can be transformed according to the first transformation relationship to obtain the first set of data to be spliced; correspondingly, based on Figure 7b Determine the second transformation relationship based on the stripes contained in , determine the third transformation relationship based on the stripes contained in FIG. 7 c , and further obtain the second group of data to be spliced and the third group of data to be spliced.
  • In this example, the first transformation relationship is the transformation relationship determined based on FIG. 7a, the second transformation relationship is the transformation relationship determined based on FIG. 7b, the first data to be spliced is the first group of data to be spliced, and the second data to be spliced is the second group of data to be spliced.
  • the first transformation relationship and the second transformation relationship are expansion transformation relationships
  • the first data to be spliced is a first expansion image
  • the second data to be spliced is a second expansion image.
  • the first transformation relationship and the second transformation relationship may be a transformation relationship from a two-dimensional illumination light image to a two-dimensional expanded image.
  • each pixel in the two-dimensional illumination light image acquired in step S420 can be directly converted one by one according to the corresponding transformation relationship to obtain new two-dimensional image data. The specific implementation process of this example will be explained in detail below.
  • the transformation corresponding to the first transformation relationship and the second transformation relationship is three-dimensional reconstruction
  • the first data to be spliced is the first target model
  • the second data to be spliced is the second target model.
  • three-dimensional reconstruction may be performed on the target image according to the position and shape of its reconstructed constituent units, so as to obtain the corresponding transformation relationship.
  • the acquired two-dimensional illumination light image is mapped to the reconstructed three-dimensional model according to the transformation relationship, so as to obtain a corresponding three-dimensional target model.
  • Step S440: according to the first mapping position of the first mapped splicing component unit and the second mapping position of the second mapped splicing component unit, determine the matching relationship between the first mapped splicing component unit and the second mapped splicing component unit; wherein the first mapping position is used to represent the position of the first mapped splicing component unit, and the second mapping position is used to represent the position of the second mapped splicing component unit; the first mapping position is equal to the position obtained by performing the operation corresponding to the first transformation relationship on the position of the splicing component unit in the first structured light image, and the second mapping position is equal to the position obtained by performing the operation corresponding to the second transformation relationship on the position of the splicing component unit in the second structured light image.
  • each of the circular speckle and the square speckle can be regarded as a mosaic unit. Since the reconstructed structured light pattern and the spliced structured light pattern formed by the structured light beam can be arranged according to a preset rule, there is also a corresponding positional relationship between the reconstructed component unit and the spliced component unit in the first structured light image. In step S430, the first transformation relationship and the second transformation relationship are obtained.
  • Therefore, the operation corresponding to the first transformation relationship can be performed on the positions of the splicing component units in the first structured light image to obtain the corresponding positions of the first mapped splicing component units, that is, the first mapping positions; similarly, the second mapping positions of the second mapped splicing component units can also be obtained. Each mapped splicing component unit can be regarded as an alignment mark unit for aligning the data to be spliced, and alignment mark units at approximately the same position in the two sets of data to be spliced can be paired, so that in the subsequent splicing process the data to be spliced are spliced based on the paired alignment mark units.
  • the matching relationship between the first mapping mosaic component unit and the second mapping mosaic component unit may include a one-to-one correspondence of matching speckle pairs.
  • For example, the Unet model can be used to identify the splicing component units in the two structured light images corresponding to two adjacent image acquisition devices and obtain the position coordinates of the splicing component units in each structured light image; the position coordinates of each splicing component unit are then transformed according to the expansion transformation relationship to obtain the position coordinates of each splicing component unit on the image plane where the expanded image is located, thereby obtaining the first mapping position of the first mapped splicing component unit and the second mapping position of the second mapped splicing component unit.
  • the two structured light images corresponding to two adjacent image acquisition devices can be converted respectively according to the expansion transformation relationship, and the mapping and splicing constituent units are identified in the converted structured light images to obtain each mapping The position coordinates of the mosaic composition unit on the image plane where the expanded image is located, thereby obtaining the first mapping position of the first mapping mosaic composition unit and the second mapping position of the second mapping mosaic composition unit.
  • The mapping positions obtained by the above two methods are approximately equal, so "equal" in step S440 can be understood as approximately equal.
  • one-to-one matching can be performed on the mapping mosaic constituent units (the first mapping mosaic constituent unit and the second mapping mosaic constituent unit) that meet the preset distance requirement on the image plane where the unfolded image is located, to obtain a matching pair of mapping mosaic constituent units .
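  • A hedged sketch of one way to perform this one-to-one matching (a greedy nearest-neighbor pairing within a preset distance; the text does not fix a particular matching algorithm):

```python
import numpy as np
from typing import List, Tuple

def match_mapped_units(first_positions: np.ndarray,
                       second_positions: np.ndarray,
                       max_distance: float) -> List[Tuple[int, int]]:
    """Greedily pair mapped splicing units whose unfolded positions are close.

    first_positions, second_positions: (N, 2) and (M, 2) arrays of (x, y)
    mapping positions on the common image plane. Returns index pairs (i, j).
    """
    pairs = []
    used_second = set()
    for i, p in enumerate(first_positions):
        dists = np.linalg.norm(second_positions - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_distance and j not in used_second:
            pairs.append((i, j))
            used_second.add(j)
    return pairs
```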
  • the first expanded image and the second expanded image are on the same image plane.
  • In another example (when the data to be spliced are three-dimensional target models), step S440 can map each splicing component unit in each structured light image one by one into the 3D space where the first target model and the second target model are located, so as to obtain the mapping position of each mapped splicing component unit; the mapped splicing component units whose mapping positions satisfy the preset distance requirement are then matched, and the corresponding matching relationship is obtained.
  • Step S450: based on the matching relationship, splice the first data to be spliced and the second data to be spliced to obtain the overall data.
  • the above step S440 determines the matching relationship between the first mapping mosaic component unit and the second mapping mosaic component unit. For example, if it is determined that the first mapped mosaic component unit and the second mapped mosaic component unit meet a preset distance requirement, they may be regarded as a group of mapped mosaic component unit pairs.
  • For example, if the first mapping position is at coordinates (2, 9) and the second mapping position is at coordinates (3, 9), and the two positions satisfy the preset distance requirement, the two belong to a matching pair of mapped splicing component units. Accordingly, a matching relationship between at least part of the first mapped splicing component units and at least part of the second mapped splicing component units can be determined.
  • an alignment relationship between at least some of the pixels in the first data to be spliced and at least some of the pixels in the second data to be spliced can be further determined.
  • the corresponding pixel points in the first data to be stitched and the second data to be stitched can be stitched and synthesized according to the alignment relationship, so as to obtain the overall data.
  • Further, the obtained overall data can be spliced and synthesized with the third data to be spliced according to the corresponding matching relationship, and finally the overall data covering all three is obtained.
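  • As an illustrative sketch only, one simple way to synthesize two unfolded images once the alignment is known (here reduced to a single translation estimated from matched marker points, assumed non-negative; the actual splicing in the embodiment may differ):

```python
import numpy as np

def stitch_unfolded_images(img1: np.ndarray, img2: np.ndarray,
                           pts1: np.ndarray, pts2: np.ndarray) -> np.ndarray:
    """Stitch two unfolded grayscale images using matched marker positions.

    pts1/pts2: (K, 2) arrays of matched (x, y) positions in img1 and img2.
    The alignment is approximated by the mean translation between the matches
    (assumed non-negative here); overlapping pixels are averaged to soften
    the seam.
    """
    shift = np.mean(pts1 - pts2, axis=0)          # img2 -> img1 translation
    dx, dy = int(round(shift[0])), int(round(shift[1]))

    h = max(img1.shape[0], img2.shape[0] + dy)
    w = max(img1.shape[1], img2.shape[1] + dx)
    canvas = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)

    canvas[:img1.shape[0], :img1.shape[1]] += img1
    weight[:img1.shape[0], :img1.shape[1]] += 1
    canvas[dy:dy + img2.shape[0], dx:dx + img2.shape[1]] += img2
    weight[dy:dy + img2.shape[0], dx:dx + img2.shape[1]] += 1

    return (canvas / np.maximum(weight, 1)).astype(np.uint8)
```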
  • In this way, a plurality of target images collected under a structured light and illumination light environment are obtained; according to the reconstruction constituent units in the structured light image corresponding to each target image, the illumination light image corresponding to that target image is transformed to obtain the data to be spliced corresponding to each target image; and according to the splicing constituent units in the structured light images, the multiple sets of data to be spliced are accurately spliced and synthesized to obtain the overall data.
  • the image mosaic method of the embodiment of the present application uses different constituent units in the structured light image for different processing functions: one part is used for data form transformation, and the other part is used for precise alignment between data.
  • Therefore, the overall data of the target object obtained through this scheme has higher splicing accuracy and a better splicing effect, and is more friendly to the downstream fields in which the data is applied.
  • For example, the simulated rolled fingerprint image obtained through this scheme is compatible with fingerprint images collected by the traditional inked-impression method, which can greatly improve the accuracy of fingerprint recognition and thus significantly improve the user experience.
  • the first transformation relationship and the second transformation relationship are expansion transformation relationships
  • the first data to be spliced is a first expansion image
  • the second data to be spliced is a second expansion image.
  • Determining the first transformation relationship based on the reconstruction constituent units contained in the first reconstructed image pattern may include: performing three-dimensional reconstruction according to the positions of the reconstruction constituent units contained in the first reconstructed image pattern to obtain a first model; obtaining a first first-order transformation relationship according to the positions of the reconstruction constituent units contained in the first reconstructed image pattern and the first model; expanding the first model along the common coordinate axis to obtain the unfolded position of each point on the first model; obtaining a first second-order transformation relationship according to the first model and the unfolded positions of the points on the first model; and obtaining the first transformation relationship according to the first first-order transformation relationship and the first second-order transformation relationship.
  • determining the second transformation relationship based on the reconstructed constituent units contained in the second reconstructed image pattern includes: performing three-dimensional reconstruction according to the positions of the reconstructed constituent units contained in the second reconstructed image pattern to obtain a second model, and according to the second reconstructed image The position of the reconstructed unit contained in the pattern and the second model obtain the second first-order transformation relationship; expand the second model along the common coordinate axis to obtain the expanded position of each point on the second model; according to the second model and the second model The expansion position of each point above is used to obtain the second second-order transformation relationship; the second transformation relationship is obtained according to the second first-order transformation relationship and the second second-order transformation relationship.
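  • To make the two-stage structure concrete, the following is a hedged sketch of composing a first-order mapping (image point to 3D point) and a second-order mapping (3D point to unfolded 2D point) into one overall transformation; the function names and representations are assumptions for illustration.

```python
import numpy as np
from typing import Callable

# Hypothetical stage functions (their actual forms depend on the calibration
# and unfolding described in the text):
FirstOrder = Callable[[np.ndarray], np.ndarray]   # (x, y) pixel -> (u, v, w) 3D point
SecondOrder = Callable[[np.ndarray], np.ndarray]  # (u, v, w) -> unfolded (x', y')

def compose_transformation(first_order: FirstOrder,
                           second_order: SecondOrder) -> Callable[[np.ndarray], np.ndarray]:
    """Return the overall transformation: image pixel -> unfolded-image pixel."""
    def transform(pixel_xy: np.ndarray) -> np.ndarray:
        point_3d = first_order(pixel_xy)      # first-order relationship
        return second_order(point_3d)         # second-order relationship
    return transform
```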
  • the reconstruction unit in Fig. 8 is a number of bright and dark stripes with different widths, and the first model can be obtained by performing three-dimensional reconstruction according to the position of each stripe in the image coordinate system.
  • FIG. 8 shows a plurality of stripes with different positions and shapes, and each stripe may be marked sequentially according to a preset rule.
  • the structured light image can be input to the neural network used to detect the reconstruction components, and the labels of the reconstruction components included in the structured light image and the positions of the reconstruction components in the image coordinate system can be obtained.
  • the position of the reconstructed component unit in the image coordinate system may be represented by the coordinates of several points distributed with a preset point density on the centerline of the reconstructed component unit in the image coordinate system. It can be understood that the position of the reconstruction unit can also be represented by other points such as the upper and lower edge points of the reconstruction unit.
  • Since the size of the detected reconstruction constituent unit changes with the pixel values of the structured light image (the edges tend to shift accordingly), it is more robust to use points on the midline. It is easy to understand that the number of points acquired on the center line of each stripe may not be exactly the same. Let the coordinates of a point on the midline of each reconstruction constituent unit be (x, y).
  • three-dimensional reconstruction can be performed on each reconstruction component unit through the projection matrix of the camera and the light surface parameters of the structured light.
  • the reconstruction unit is a fringe
  • the three-dimensional coordinates (u, v, w) of each point on the centerline of each fringe can be obtained.
  • The relationship can be written as s·[x, y, 1]ᵀ = P·[u, v, w, 1]ᵀ, where P represents the projection matrix of the camera (a known quantity), (u, v, w) represents the 3D coordinates of the point on the fringe center line to be reconstructed (an unknown quantity), (x, y) represents the coordinates of the point of the reconstruction constituent unit in the structured light image (a known quantity), and s is a distance coefficient (an unknown quantity).
  • the normal vector of the light plane where each reconstruction component (for example, fringe) is located may be obtained through a calibration process for each camera.
  • Fig. 9 shows a simple schematic diagram of collecting structured light stripes by a camera.
  • By combining the projection relationship with the light plane of each fringe, the three-dimensional coordinates of each point on the centerline of the stripe can be solved.
  • the three-dimensional coordinates of each point in the first structured light image can be further obtained.
  • a visualized three-dimensional point cloud model as shown in FIG. 10 that is, the first model can be obtained.
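  • As a hedged numerical sketch (assuming a pinhole camera with a known 3×4 projection matrix P and a calibrated light plane n·X + d = 0 for each fringe; not the literal patented algorithm), each centerline pixel can be back-projected and intersected with its fringe's light plane:

```python
import numpy as np

def reconstruct_centerline_point(P: np.ndarray, pixel_xy: np.ndarray,
                                 plane_n: np.ndarray, plane_d: float) -> np.ndarray:
    """Intersect the camera ray through a fringe-centerline pixel with the
    fringe's light plane n·X + d = 0, returning the 3D point (u, v, w).

    P is the 3x4 camera projection matrix; pixel_xy = (x, y).
    """
    # Camera center C: the null space of P (P @ [C; 1] = 0).
    _, _, vt = np.linalg.svd(P)
    C = vt[-1]
    C = C[:3] / C[3]

    # Ray direction: back-project the pixel with the pseudo-inverse of P.
    x_h = np.array([pixel_xy[0], pixel_xy[1], 1.0])
    X = np.linalg.pinv(P) @ x_h
    direction = X[:3] / X[3] - C
    direction /= np.linalg.norm(direction)

    # Solve n·(C + t*direction) + d = 0 for t.
    t = -(plane_n @ C + plane_d) / (plane_n @ direction)
    return C + t * direction
```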
  • Similarly, the second model corresponding to the fourth target image of the same three-dimensional target can be obtained, as well as the third model corresponding to another target image (such as a fifth target image) different from the third and fourth target images, and so on, together with the first-order transformation relationship corresponding to each model; details are not repeated here.
  • the first model can be further expanded.
  • the three-dimensional surface shape of the finger can be regarded as a cylinder-like body, the central axis of the cylinder can be found, and the side surface of the cylinder can be expanded according to the central axis.
  • the common coordinate axes of the multiple 3D models for the 3D object may be found first, and then unfolded according to the common coordinate axes.
  • Therefore, the method 400 may further include: forming a 3D point group from the 3D points that correspond to the same reconstruction constituent unit in the first model and the second model, and fusing the 3D coordinates of the 3D point group to obtain fused coordinates; and determining the common coordinate axis according to the fused coordinates of each reconstruction constituent unit.
  • merging the 3D coordinates of the 3D point group to obtain the fused coordinates and determining the common coordinate axis according to the fused coordinates of each reconstructed constituent unit includes: according to the points on the midline of each reconstructed constituent unit (such as a stripe) in the 3D point group The three-dimensional coordinates of each reconstruction unit are determined to determine the fusion coordinates of each reconstruction unit; the method of principal component analysis is used to determine the corresponding common coordinate axis according to the fusion coordinates of each reconstruction unit.
  • a deduplication operation can be performed on the points located on the midline of the same reconstruction unit to obtain the fusion coordinates of the reconstruction unit.
  • the coordinates of the points in the overlapping areas in each two models may be fused by means of averaging.
  • For example, if each model has 10 points in the overlapping area, the 10 points on the two models can be associated one by one according to the nearest-neighbor relationship to obtain 10 three-dimensional point pairs; the midpoint of each three-dimensional point pair is then computed and used as the fused point of the overlapping area.
  • Alternatively, deduplication can be performed with the intersection center point of each overlapping area as the boundary. For example, for AA₁ and B₁B in FIG. 11a, with the intersection point O as the boundary, the points in the overlapping area B₁O and the points in the overlapping area OA₁ can be removed, obtaining the deduplicated AB shown in FIG. 11b.
  • the coordinates of the points on AB after deduplication are fusion coordinates.
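• A hedged numerical sketch of the averaging-based fusion described above (array names and values are illustrative, not taken from the source): nearest-neighbour point pairs from two overlapping mid-line segments are replaced by their midpoints.

```python
import numpy as np

def fuse_overlap(points_a, points_b):
    """Fuse overlapping mid-line points from two 3D models.

    Each point of points_a is paired with its nearest neighbour in points_b,
    and the midpoint of every pair is kept as the fused (deduplicated) point.
    """
    fused = []
    for p in points_a:
        j = np.argmin(np.linalg.norm(points_b - p, axis=1))  # nearest neighbour index
        fused.append((p + points_b[j]) / 2.0)                 # midpoint of the point pair
    return np.array(fused)

# Illustrative overlapping segments (10 points each, as in the example above):
t = np.linspace(0.0, 1.0, 10)
seg_a = np.stack([t, np.zeros(10), np.zeros(10)], axis=1)
seg_b = seg_a + 0.02 * np.random.default_rng(0).normal(size=(10, 3))
print(fuse_overlap(seg_a, seg_b).shape)  # (10, 3)
```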
  • the de-duplication operation can alleviate the problem of inaccurate determination of the common coordinate axis when the finger is placed off-center, thereby improving the quality of image processing.
• the deduplicated combined midline (the result of combining the three midlines) can be obtained, and the coordinates of the points on the combined midline are the fused coordinates corresponding to the reconstructed constituent unit.
  • the three-dimensional coordinates of the points on the preset position on the combined center line after deduplication can be extracted, and the corresponding common coordinate axes can be determined by the method of principal component analysis.
  • the points at the preset positions may include, for example, the left and right endpoints of the combined center line after deduplication.
  • the three-dimensional coordinates of the endpoints at both ends of each deduplicated combined median line may be extracted at one time, and input into the principal component analysis algorithm model to obtain a common coordinate axis.
• the points at the preset positions may also include the left and right endpoints of the deduplicated combined midline together with the midpoint of the combined midline, and the three-dimensional coordinates of all three points on each combined midline are input into principal component analysis to obtain the common coordinate axis.
  • other suitable points on the combined median line after deduplication can also be selected for analysis to obtain the common coordinate axis.
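• A minimal sketch of the principal-component-analysis step, assuming the fused endpoint (and optionally midpoint) coordinates have already been collected into one array; the sample coordinates are hypothetical. The first principal direction of the fused points is taken as the common coordinate axis.

```python
import numpy as np

def common_axis_from_fused_points(fused_points):
    """Return the common coordinate axis as the first principal component.

    fused_points : (N, 3) array of fused endpoint/midpoint coordinates taken
                   from the deduplicated combined mid-lines.
    """
    centered = fused_points - fused_points.mean(axis=0)
    # The first right-singular vector is the direction of largest variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

# Hypothetical fused endpoints of three combined mid-lines:
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.5, 0.2],
                [0.1, 1.0, 0.0], [10.2, 1.4, 0.3],
                [0.0, 2.0, 0.1], [9.9, 2.5, 0.2]])
print(common_axis_from_fused_points(pts))  # approximately the x direction
```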
  • each three-dimensional model can be expanded along the common coordinate axis.
• the three-dimensional Cartesian coordinate system can be re-established based on the common coordinate axis, and the new coordinates of each point are denoted as (u', v', w').
  • FIG. 12 is a schematic diagram of a method for unfolding a three-dimensional fingerprint model. It can be understood that in an ideal state, the coordinates of the w' axis of each point on the center line of each stripe in the new three-dimensional coordinate system are equal. Based on this principle, each stripe can be unfolded sequentially along the common coordinate axis.
  • a plane l can be constructed through the v' axis and the w' axis, and multiple intersection points of the plane l and each reconstruction component unit can be obtained.
• the remaining symbol in the unfolding formula represents a pixel coefficient, which is greater than 1.
• the pixel coefficient can be set according to the pixel density of the expanded image.
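• The cylinder-style unfolding can be sketched roughly as follows, under the assumption that the finger surface is treated as a cylinder-like body around the common axis; this is an illustrative unrolling into (axial position, arc length) scaled by the pixel coefficient, not a reproduction of the patent's formula (5).

```python
import numpy as np

def unfold_points(points, axis, gamma=2.0):
    """Unroll 3D surface points around a common axis onto a 2D image plane.

    points : (N, 3) array of 3D points (e.g. the first model's point cloud)
    axis   : 3-vector, the common coordinate axis (u' direction)
    gamma  : pixel coefficient (> 1) controlling the pixel density of the
             expanded image.
    Returns an (N, 2) array of unfolded coordinates (axial position, arc length).
    """
    axis = axis / np.linalg.norm(axis)
    rel = points - points.mean(axis=0)
    axial = rel @ axis                           # coordinate along the common axis
    radial = rel - np.outer(axial, axis)         # component perpendicular to the axis
    # Build two perpendicular reference directions (v', w') around the axis.
    ref = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    v = np.cross(axis, ref); v /= np.linalg.norm(v)
    w = np.cross(axis, v)
    theta = np.arctan2(radial @ w, radial @ v)   # angle around the axis
    arc = np.linalg.norm(radial, axis=1).mean() * theta  # unrolled arc length
    return gamma * np.stack([axial, arc], axis=1)

# Toy half-cylinder point cloud around the x axis:
rng = np.random.default_rng(1)
ang = rng.uniform(-np.pi / 2, np.pi / 2, 200)
cloud = np.stack([rng.uniform(0, 20, 200), 5 * np.cos(ang), 5 * np.sin(ang)], axis=1)
print(unfold_points(cloud, np.array([1.0, 0.0, 0.0])).shape)  # (200, 2)
```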
  • the first transformation relationship includes a first-order transformation relationship and a second-order transformation relationship.
• by combining formula (3) and formula (5), the correspondence between each point on the first structured light image and its unfolded position can be obtained.
  • the second first-order transformation relationship, the second second-order transformation relationship, and the second transformation relationship corresponding to the fourth target image can also be obtained.
• Those of ordinary skill in the art can easily understand this implementation, which will not be repeated here.
  • the pixel points on the first illumination light image may be directly transformed according to the first transformation relationship to obtain the first data to be spliced.
  • the first illumination light image may contain several pixels, for example, 3072*2048 pixels.
• the position coordinates of each pixel can be substituted into the above formula (6) to obtain its unfolded position coordinates, and the pixel values can accordingly be transferred to the unfolded positions one by one, thereby obtaining the expanded image corresponding to the first illumination light image, i.e. the first data to be spliced.
  • the acquisition method of the second data to be spliced is similar to that of the first data to be spliced, and will not be repeated here.
• the above technical solution can directly convert the two-dimensional illumination light image into a two-dimensional unfolded image according to the calculated transformation relationship, which improves image processing efficiency.
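• One common way to implement such a direct conversion is to tabulate the transformation as a per-pixel look-up table and resample the illumination light image in a single pass. The sketch below uses OpenCV's remap with an inverse map (for every unfolded pixel, the source pixel it comes from); this is an implementation choice, not the patent's formula (6) itself.

```python
import numpy as np
import cv2

def unfold_illumination_image(illum_img, inv_map_x, inv_map_y):
    """Resample an illumination light image onto the unfolded image plane.

    inv_map_x, inv_map_y : float32 arrays, sized like the unfolded image,
        giving for every unfolded pixel the source pixel coordinates in the
        original illumination light image (the transformation relationship
        stored as an inverse look-up table).
    """
    return cv2.remap(illum_img, inv_map_x, inv_map_y,
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)

# Illustrative call with a dummy 3072x2048-pixel image and an identity mapping:
img = np.zeros((2048, 3072), dtype=np.uint8)
ys, xs = np.mgrid[0:2048, 0:3072].astype(np.float32)
print(unfold_illumination_image(img, xs, ys).shape)  # (2048, 3072)
```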
• Where the first transformation relationship and the second transformation relationship are expansion transformation relationships, the first data to be spliced is the first expanded image and the second data to be spliced is the second expanded image. Determining the first transformation relationship of the reconstructed constituent units includes: performing three-dimensional reconstruction according to the positions of the reconstructed constituent units contained in the first reconstructed image pattern to obtain the first model, and obtaining the first first-order transformation relationship according to those positions and the first model; unfolding the first model along the common coordinate axis to obtain the unfolded position of each point on the first model; and obtaining the first second-order transformation relationship according to the first model and the unfolded positions of its points; wherein the first transformation relationship includes the first first-order transformation relationship and the first second-order transformation relationship.
  • determining the second transformation relationship based on the reconstructed constituent units contained in the second reconstructed image pattern includes: performing three-dimensional reconstruction according to the positions of the reconstructed constituent units contained in the second reconstructed image pattern to obtain a second model, and according to the second reconstructed image The position of the reconstructed unit contained in the pattern and the second model obtain the second first-order transformation relationship; expand the second model along the common coordinate axis to obtain the expanded position of each point on the second model; according to the second model and the second model The expanded position of each point above obtains the second second-order transformation relationship; wherein, the second transformation relationship includes the second first-order transformation relationship and the second second-order transformation relationship.
  • transforming the first illumination light image according to the first transformation relationship to obtain the first data to be spliced may include: performing a first-order transformation on the first illumination light image according to the first first-order transformation relationship to obtain the first A three-dimensional model corresponding to the illumination light image; performing a second-order transformation on the three-dimensional model corresponding to the first illumination light image according to the first second-order transformation relationship to obtain a first expanded image corresponding to the first illumination light image.
  • Transforming the second illumination light image according to the second transformation relationship to obtain the second data to be spliced may include: performing a first-order transformation on the second illumination light image according to the second first-order transformation relationship to obtain the data corresponding to the second illumination light image. the three-dimensional model; according to the second second-order transformation relationship, the second-order transformation is performed on the three-dimensional model corresponding to the second illumination light image to obtain a second expanded image corresponding to the second illumination light image.
• the first transformation relationship may include a first first-order transformation relationship and a first second-order transformation relationship.
  • the first first-order transformation relationship and the first second-order transformation relationship can also be determined by obtaining the first model through the above-mentioned three-dimensional reconstruction and unfolding the first model.
• in this case it is not necessary to calculate formula (6) and transform the pixels of the first illumination light image directly into the first expanded image; instead, the pixels of the first illumination light image are first transformed into three-dimensional space based on the first first-order transformation relationship, that is, formula (3), and then transformed from three-dimensional space into the two-dimensional expanded image based on the first second-order transformation relationship, that is, formula (5). That is, the pixels of the first illumination light image are transformed to the two-dimensional expanded image through two successive transformations to obtain the first expanded image.
  • the acquisition method of the second expanded image is similar to that of the first expanded image, which will not be repeated herein.
  • the intermediate process of this solution can also show the user the 3D reconstruction effect corresponding to the illumination light image (such as a fingerprint image), which is convenient for the user to check and modify, and can also provide a variety of visual images to meet the various needs of the user.
• Where the transformation corresponding to the first transformation relationship and the second transformation relationship is a three-dimensional reconstruction, the first data to be spliced is the first target model and the second data to be spliced is the second target model. Determining the first transformation relationship for the reconstructed constituent units contained in the first reconstructed image pattern includes: performing three-dimensional reconstruction according to the positions of the reconstructed constituent units contained in the first reconstructed image pattern to obtain the first model, and obtaining the first transformation relationship according to those positions and the first model. Transforming the first illumination light image according to the first transformation relationship to obtain the first data to be spliced includes: performing a first-order transformation on the first illumination light image according to the first transformation relationship to obtain the first target model. Determining the second transformation relationship based on the reconstructed constituent units contained in the second reconstructed image pattern includes: performing three-dimensional reconstruction according to the positions of the reconstructed constituent units contained in the second reconstructed image pattern to obtain the second model, and obtaining the second transformation relationship accordingly.
• in this case, the unfolding of the 3D model may be omitted; instead, the pixel data contained in the illumination light images is also transformed into three-dimensional space, resulting in a first target model and a second target model.
• the first target model and the second target model can then be directly spliced.
  • a similar transformation can also be performed on the second illumination light image to obtain a second target model.
• the image mosaic method 400 may further include: detecting the first structured light image to obtain the positions of the splicing constituent units in the first structured light image; performing the operation corresponding to the first transformation relationship on these positions to obtain the first mapping positions; detecting the second structured light image to obtain the positions of the splicing constituent units in the second structured light image; and performing the operation corresponding to the second transformation relationship on the positions of the splicing constituent units in the second structured light image to obtain the second mapping positions.
• taking FIG. 7a as an example of the first structured light image and FIG. 7b as an example of the second structured light image:
  • the Unet model can be used to detect and identify each circular speckle and square speckle in Fig. 7a and Fig. 7b respectively, and obtain the position coordinates of each circular speckle and square speckle. Then, the position coordinates of each speckle can be substituted into the aforementioned formula (3) or formula (6), to obtain the position coordinates of each speckle in the image plane where the three-dimensional model or expanded image is located.
  • the position of the mosaic component unit is firstly detected from the structured light image, and then only the position information is transformed to obtain the position of the corresponding mapped mosaic component unit.
  • This solution only needs to perform position calculations, and does not need to map the pixel values of the structured light image to the image plane where the unfolded image is located, so the amount of calculation is relatively small.
  • the image stitching method 400 further includes: performing an operation corresponding to the first transformation relationship on the first structured light image to obtain a first mapped structured light image; performing mapping and stitching composition unit detection on the first mapped structured light image to obtain a first mapped structured light image A mapping position; performing operations corresponding to the second transformation relationship on the second structured light image to obtain a second mapped structured light image; performing mapping and splicing composition unit detection on the second mapped structured light image to obtain a second mapping position.
• the first structured light image and the second structured light image can also be converted as whole images according to the obtained first transformation relationship and second transformation relationship, mapping the structured light image information onto a new (mapped) structured light image, and the position of each mapped splicing constituent unit is then detected on the mapped structured light image.
• the coordinates of each pixel in the three structured light images in Figures 7a-7c can be substituted into formula (3) or formula (6) to obtain three expanded images containing spliced structured light patterns, or three 3D models containing spliced structured light patterns; the pixel values of the pixels in the structured light images are also mapped to the corresponding expanded image (mapped structured light image) or 3D model.
  • speckle detection is performed on the unfolded image or the 3D model to obtain the position coordinates of each speckle, and then the mapping position of the constituent units of the mapping splicing can be obtained.
• Where the transformation corresponding to the first transformation relationship and the second transformation relationship is three-dimensional reconstruction, the first data to be spliced is the first target model and the second data to be spliced is the second target model, and the image stitching method 400 may further include step S460: expanding the overall data along the common coordinate axis to obtain an overall expanded image.
• the first target model and the second target model can be spliced through steps S440 and S450 to obtain three-dimensional overall data containing the pixel information of the illumination light images, and the three-dimensional overall data is then expanded.
  • the principle of the unfolding process in step S460 is similar to the above method of unfolding the 3D model to a 2D plane.
  • Each pixel in the overall data can be unfolded according to the second-order transformation relationship at one time to obtain the overall unfolded image.
• step S440 includes at least one of the following: a first search operation, a second search operation and a checking operation; wherein,
• the first search operation includes: for any specific first mapped splicing constituent unit, searching, among the second mapped splicing constituent units, for the one closest to that specific first mapped splicing constituent unit, as the second mapped splicing constituent unit matching the specific first mapped splicing constituent unit;
• the second search operation includes: among the first mapped splicing constituent units, taking the first current mapped splicing constituent unit as a first starting point and searching for a target first mapped splicing constituent unit that satisfies a relative position condition with the first starting point; among the second mapped splicing constituent units, taking the second current mapped splicing constituent unit as a second starting point and searching for a target second mapped splicing constituent unit that satisfies the relative position condition with the second starting point, and taking the target second mapped splicing constituent unit as the second mapped splicing constituent unit matching the target first mapped splicing constituent unit; wherein the first current mapped splicing constituent unit matches the second current mapped splicing constituent unit;
• the checking operation includes: calculating the distance between the mapping positions of each pair of mutually matched first and second mapped splicing constituent units to obtain a matching error; determining the mutually matched first and second mapped splicing constituent units whose matching error meets the requirement as the finally matched first and second mapped splicing constituent units; the multiple pairs of mapped splicing constituent units included in the matching relationship are the finally matched pairs of mapped splicing constituent units.
  • the mapping and mosaic component units corresponding to the two target images may be searched for matching with a preset distance requirement.
  • the preset distance requirement may be, for example, the shortest distance.
  • the mapping mosaic constituent unit may be, for example, a bright/dark circular speckle or a square speckle located on each stripe as shown in FIG. 6 .
• the speckles in FIG. 6 can be projected onto the finger to be collected to obtain two target images; the two target images can be input into the Unet model to identify the position coordinates of each speckle, and by substituting these coordinates into formula (3) or formula (6) in the above example, each speckle can be mapped to the corresponding unfolded image or three-dimensional model.
  • the first search operation may include the following operations.
• first, any specific first mapped splicing constituent unit is selected, for example the first mapped speckle corresponding to the bright square speckle 610 in FIG. 6; then the second mapped splicing constituent unit closest to this specific first mapped splicing constituent unit is searched for, for example the bright square speckle (second mapped speckle) closest to that first mapped speckle, and its coordinates are recorded.
  • whether two map stitching constituent units match may be determined by judging whether the distance between them meets a first preset distance requirement, for example, whether it is smaller than a first predetermined distance threshold.
  • the first predetermined distance threshold may be any suitable value.
• the first mapped splicing constituent unit and the second mapped splicing constituent unit that meet the first preset distance requirement may be considered as matching; by analogy, multiple pairs of matched mapped splicing constituent units can be found.
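• A minimal sketch of the first search operation (the KD-tree and the threshold value are implementation choices, not stated in the source): each first mapped speckle is paired with its nearest second mapped speckle, and pairs farther apart than the first predetermined distance threshold are discarded.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_speckles(first_positions, second_positions, max_dist=5.0):
    """Nearest-neighbour matching of mapped splicing constituent units.

    first_positions, second_positions : (N, 2) or (N, 3) arrays of mapping
        positions of the first / second mapped speckles.
    max_dist : first predetermined distance threshold; pairs farther apart
        than this are not considered a match.
    Returns (i, j) index pairs meaning first_positions[i] matches second_positions[j].
    """
    tree = cKDTree(second_positions)
    dists, idx = tree.query(first_positions)  # nearest second speckle for each first speckle
    return [(i, int(j)) for i, (d, j) in enumerate(zip(dists, idx)) if d <= max_dist]

# Illustrative positions:
a = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
b = np.array([[0.5, 0.2], [10.3, -0.1], [40.0, 0.0]])
print(match_speckles(a, b))  # [(0, 0), (1, 1)] - the third first speckle has no close partner
```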
  • a second lookup operation may also be performed.
  • the first lookup operation and the second lookup operation can coexist.
  • the first starting point may be the first mapping combination unit (ie, the first current mapping combination unit) in any pair of currently matched mapping combination units.
• from the first starting point, it is possible to search for a target first mapped splicing constituent unit that satisfies a relative position condition with the first starting point.
  • the relative position condition may include that the distance from the first starting point satisfies a second preset distance requirement, or is the closest to the first starting point, or is the closest to the first starting point along a predetermined direction, and so on.
  • the second preset distance requirement may be the same as or different from the above-mentioned first preset distance requirement.
  • the second preset distance requirement may be less than or equal to a second predetermined distance threshold, and the second predetermined distance threshold may be any suitable value.
  • the speckle located at the center of any fringe may be used as the initial first starting point.
  • a second mapped speckle that matches the first starting point at this time is found from each of the second mapped speckles (a manner of searching for a matched mapped speckle may optionally be a first search operation).
• the next first mapped speckle may then be found from the first mapped speckle located at the center, for example the first mapped speckle closest to it along a predetermined direction; speckle matching is then performed based on the newly found first mapped speckle to find the second mapped speckle that matches it; the newly matched first mapped speckle is then used as a new first starting point to find the next first mapped speckle and continue matching.
  • the above operations can be performed cyclically.
  • a third lookup operation may also be performed.
• based on the first mapping positions of the above-mentioned first mapped splicing constituent units and the second mapping positions of the above-mentioned second mapped splicing constituent units, global matching is performed between the first mapped splicing constituent units and the second mapped splicing constituent units so that the overall matching error is minimized; the overall matching error is the sum of the mapping position differences between the first and second mapped splicing constituent units having a matching relationship.
• for example, when there are 5 first mapped splicing constituent units and 4 second mapped splicing constituent units, and the No. 1-3 first mapped splicing constituent units match the No. 1-3 second mapped splicing constituent units respectively, the matching of the remaining units is determined so that the overall matching error is minimized.
• the multiple matched speckle pairs can also be checked; for example, the relative distance value of each matched speckle pair can be calculated.
• the mean M and the variance of the multiple relative distance values may be calculated, a threshold range of the relative distance may be set based on these statistics, and speckle pairs falling outside the range may be eliminated.
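• A hedged sketch of this checking step, assuming the list of index pairs produced by the search operations is available; eliminating pairs outside mean ± k standard deviations is an illustrative choice of threshold range.

```python
import numpy as np

def check_matches(first_positions, second_positions, matches, k=3.0):
    """Eliminate mismatched speckle pairs using distance statistics.

    The relative distance of every matched pair is computed; pairs whose
    distance deviates from the mean M by more than k standard deviations
    are removed (the value of k is an illustrative choice).
    """
    dists = np.array([np.linalg.norm(first_positions[i] - second_positions[j])
                      for i, j in matches])
    m, sigma = dists.mean(), dists.std()
    return [pair for pair, d in zip(matches, dists) if abs(d - m) <= k * sigma]

# Illustrative check: the third pair is far more distant than the others.
pairs = [(0, 0), (1, 1), (2, 2)]
a = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
b = np.array([[0.4, 0.0], [10.5, 0.0], [28.0, 0.0]])
print(check_matches(a, b, pairs, k=1.0))  # [(0, 0), (1, 1)]
```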
• step S450 may include the following steps S451, S452 and S453: step S451, based on the matching relationship, determining the position correspondence between the first data to be spliced and the second data to be spliced; step S452, based on the position correspondence, performing a splicing transformation on one of the first data to be spliced and the second data to be spliced to obtain transformed data to be spliced; step S453, splicing the untransformed data to be spliced (whichever of the first data to be spliced and the second data to be spliced was not transformed) with the transformed data to be spliced to obtain the overall data.
  • the following takes the first data to be spliced and the second data to be spliced as expanded images as an example for specific description.
  • a plurality of speckle pairs and position coordinates of each speckle pair on the first data to be spliced and the second data to be spliced respectively can be obtained.
• Speckle pairs such as D1*D2, where D1(2,3), D2(2.1,3); E1*E2, where E1(3,4), E2(3.2,4); F1*F2, where F1(3,5), F2(3.2,5).
  • a transformation function about the first data to be spliced and the second data to be spliced can be constructed according to the coordinate relationship of the above-mentioned three speckle pairs or more speckle pairs (for example, by TPS thin-plate spline interpolation method).
  • the splicing transformation mainly makes one of the first data to be spliced and the second data to be spliced be adjusted such as scaling after transformation, so as to keep the same size or substantially the same size as the corresponding area in the other data to be spliced.
  • each pixel point of the first data to be spliced or the second data to be spliced can be transformed according to the above transformation function to obtain transformed data to be spliced.
  • the first data to be spliced is spliced and transformed.
  • the converted data to be spliced and the second data to be spliced are basically the same in size at the area corresponding to the position. On this basis, the splicing and fusion of the two can quickly obtain the overall data with less error.
  • the second data to be spliced may also first be spliced and transformed, the principle of which is the same as that of the above solution, and the user may set it as required.
  • the splicing example in which the first data to be spliced or the second data to be spliced is a three-dimensional object model is similar to the above method, which can be understood by those skilled in the art and will not be repeated here.
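• An illustrative sketch of building the splicing transformation from matched speckle coordinate pairs with thin-plate-spline interpolation; SciPy's RBFInterpolator is used here as one possible TPS implementation, and the coordinates are the toy pairs from the example above.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Matched speckle coordinates on the first / second data to be spliced
# (the toy pairs D, E, F from the example above):
src = np.array([[2.0, 3.0], [3.0, 4.0], [3.0, 5.0]])
dst = np.array([[2.1, 3.0], [3.2, 4.0], [3.2, 5.0]])

# Thin-plate-spline transformation mapping coordinates of the first data to
# be spliced onto the coordinate frame of the second data to be spliced.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Any pixel position of the first data to be spliced can now be warped:
print(tps(np.array([[2.5, 3.5]])))
```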
• Where the first transformation relationship and the second transformation relationship are expansion transformation relationships, the first data to be spliced is the first expanded image, the second data to be spliced is the second expanded image, and the untransformed data to be spliced is the untransformed expanded image. Splicing the untransformed data to be spliced with the transformed data to be spliced to obtain the overall data includes: determining the pixel values of the common image area of the overall data based on the pixel values in the common image area of the untransformed expanded image and/or of the transformed expanded image; determining the pixel values of the overall data located in the first non-common image area based on the pixel values in that area of the first expanded image; and determining the pixel values of the overall data located in the second non-common image area based on the pixel values in that area of the second expanded image.
• the common image area is the area where the mutually matched first and second mapped splicing constituent units are located, the first non-common image area is the area where the first mapped splicing constituent units not matched with any second mapped splicing constituent unit are located, and the second non-common image area is the area where the second mapped splicing constituent units not matched with any first mapped splicing constituent unit are located.
  • the first expanded image may be transformed first to obtain a transformed expanded image 1310 .
• the first non-common image area is the part of the transformed expanded image 1310 that does not include any information of the second expanded image, so the pixel values of this area can be equal to the corresponding pixel values of the first expanded image; the second non-common image area 1340 is the part of the second expanded image 1320 that does not include any information of the first expanded image, so the pixel values of this area may be equal to the corresponding pixel values of the second expanded image.
  • the common image area 1350 can be the overlapping area of the transformed expanded image 1310 and the second expanded image 1320, so the pixel value of this area can adopt the pixel value of either of the two, or can be determined based on both, for example, average way.
  • the above-mentioned method for determining the pixel value of the overall data is simple and easy to operate, and has a small amount of calculation, which can be realized by directly loading corresponding logic operations in the image processing model.
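• The region-based pixel assignment can be sketched with simple mask operations; the function and mask names are illustrative, and averaging in the common area is just one of the options mentioned above.

```python
import numpy as np

def compose_overall(expanded1_t, expanded2, common, first_only, second_only):
    """Assemble the overall expanded image from two aligned expanded images.

    expanded1_t : transformed first expanded image (already aligned to expanded2)
    expanded2   : second expanded image
    common, first_only, second_only : boolean masks of the common / first
        non-common / second non-common image areas, all of the same shape.
    """
    overall = np.zeros_like(expanded2, dtype=np.float32)
    overall[first_only] = expanded1_t[first_only]
    overall[second_only] = expanded2[second_only]
    # In the common area, average the two images (either one alone would also do).
    overall[common] = (expanded1_t[common].astype(np.float32) + expanded2[common]) / 2.0
    return overall.astype(expanded2.dtype)

# Tiny illustration: left third from image 1, right third from image 2, middle averaged.
h, w = 4, 6
img1 = np.full((h, w), 100, np.uint8)
img2 = np.full((h, w), 200, np.uint8)
common = np.zeros((h, w), bool); common[:, 2:4] = True
first_only = np.zeros((h, w), bool); first_only[:, :2] = True
second_only = np.zeros((h, w), bool); second_only[:, 4:] = True
print(compose_overall(img1, img2, common, first_only, second_only))
```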
  • the overall data includes a common area, a first non-common area and a second non-common area
  • the common area is the area where the matching mapping and mosaic constituent units of the first mapping mosaic constituent unit and the second mapping mosaic constituent unit are located
  • the first non-common area is the area where the first mapping and splicing constituent units that are not matched with the second mapping splicing constituent unit are located
• the second non-common area is the area where the second mapped splicing constituent units that are not matched with any first mapped splicing constituent unit are located
  • step S460 includes: unfolding the anchor points in the overall data along the common coordinate axis to obtain the pixel coordinates of each anchor point in the overall unfolded image;
• for an anchor point located in the common area, determining its pixel value in the overall expanded image according to the pixel value of the corresponding pixel in the first illumination light image and/or the pixel value of the corresponding pixel in the second illumination light image; for an anchor point located in the first non-common area, determining its pixel value according to the pixel value of the corresponding pixel in the first illumination light image; and for an anchor point located in the second non-common area, determining its pixel value according to the pixel value of the corresponding pixel in the second illumination light image.
  • An anchor point is any point on the overall data.
  • the overall data is an overall object model obtained by splicing the first object model and the second object model.
  • the overall target model is a three-dimensional model, which may be composed of point clouds. Any point on the point cloud can be regarded as an anchor point. For example and not limitation, the coordinates of each anchor point in the three-dimensional space are determined, but the pixel value may be unknown.
• in step S460, during the process of unfolding the overall data along the common axis, the unfolding is performed anchor point by anchor point, so that each anchor point is in one-to-one correspondence with a pixel in the illumination light image; thus, the pixel value of each pixel in the illumination light image can be mapped one by one to the overall unfolded image.
  • the common area and the non-common area can be divided to perform pixel value mapping.
  • the pixel value of the anchor point can be determined directly by using the corresponding pixel value of the illumination light image to which the non-public area belongs.
• for the anchor points in the common area, either illumination light image can be selected, or the corresponding pixel values of the two illumination light images can be combined, to determine the pixel value of the anchor point; the pixel values of all anchor points in the overall expanded image are thereby obtained. Those skilled in the art can understand this method, which will not be repeated here.
  • An embodiment of the present application provides a handprint collection system.
  • the system can use the blue light source and the green light source together to supplement light (or illuminate) for the image acquisition device when the image acquisition device collects the handprint image.
  • Supplementing light with blue light and supplementing light with green light have their own advantages and disadvantages for handprint collection under different skin conditions.
• using these two light sources together to supplement light can expand the applicability of the handprint collection system to different skin conditions, thereby helping to accommodate the collection needs of various groups of people.
  • a handprint collection system is provided. You can refer back to FIG. 2 to understand the handprint collection system in this embodiment.
  • the handprint collection system 200 may include one or more image collection devices, an illumination system and a processing device 240 .
  • the lighting system includes one or more blue light sources and one or more green light sources, and each light source is used to emit light toward the handprint collection area 300 .
  • the blue light sources included in the lighting system form a blue light source group, and the green light sources included in the lighting system form a green light source group.
  • the handprint collection area 300 is used to place part or all of the user's hand, such as the fingers 400 .
  • One or more image acquisition devices are used to collect images of the handprint collection area 300 while the illumination system emits light to the handprint collection area 300 to obtain a handprint image.
  • the processing device 240 is used for processing the handprint images collected by one or more image collection devices.
  • the light source included in the lighting system described herein may be any type of light source not used for encoding, including but not limited to one or more of the following: point light source, line light source, surface light source, etc.
• the line light source may be, for example, a light strip composed of multiple point light sources.
  • at least part of the light sources in the lighting system and one or more image capture devices are located on the same side of the handprint collection area, for example, below the handprint collection area.
  • the light source located on the same side of the handprint collection area as one or more image collection devices may be called the main light source.
  • the lighting system may also include an auxiliary light source located near the handprint collection area, so as to illuminate the handprint from a position closer to the handprint.
  • the auxiliary light source is closer to the center of the handprint collection area.
  • the main light source can be arranged below the handprint collection area, and the center of gravity of the auxiliary light source (which can be understood as the respective center of gravity of each light source in the auxiliary light source) can be located or approximately on the same level as the center of the handprint collection area.
  • At least part of the light sources in the auxiliary light sources can be located at predetermined positions on both sides of the handprint collection area (the first side of the two sides corresponds to the first predetermined position, and the second side corresponds to the second predetermined position), and/or at least part of the light sources It may be located at a predetermined position near the fingertip (which may be referred to as a third predetermined position).
  • a light source located at a predetermined position on either side of the handprint collection area can emit light toward the side of the handprint collection area.
  • a light source located near the fingertip location may emit light toward the fingertip location.
  • the fingertip position refers to the expected position where the user's fingertip should fall when the user places the finger or palm on the handprint collection area.
• the goal of positioning the auxiliary light source is to illuminate the hand without affecting the imaging of the handprint acquisition system (for example, the auxiliary light source should not enter the imaging range of the image acquisition device).
  • the auxiliary light sources may also contain one or more blue light sources and one or more green light sources.
• the handprint collection system of the embodiments of the present application can be non-contact, that is, the handprint collection area is a space in which the user places part or all of the hand for collection and the handprint does not touch any entity; it can also be contact type, for example a handprint collection system that realizes handprint collection by touching a finger to the screen of a smart device.
  • FIG. 2 shows three image acquisition devices 212 , 214 and 216 . Furthermore, Fig. 2 also shows a lighting system comprising at least one blue light source and at least one green light source.
  • the lighting system shown in FIG. 2 may include three light source sets corresponding to the three image acquisition devices, each light source set includes at least one blue light source and at least one green light source. Illumination is performed when the corresponding image acquisition device performs image acquisition.
  • the light source set may be further divided into light source sub-sets.
  • the light sources in the light source set are distributed on both sides of the image acquisition device, the left light source is used as a light source sub-set, and the right light source is used as a light source sub-set.
  • a first set of light sources includes light source subsets 222 and 222'
  • a second light source set includes light source subsets 224 and 224'
  • a third light source set includes light source subsets 226 and 226'.
  • the structure of the handprint collection system 200 shown in FIG. 2 is only an example and not a limitation to the application, and the handprint collection system 200 according to the embodiment of the application is not limited to the situation shown in FIG. 2 .
  • the number of image capture devices may be more or less than three.
  • the number of light source subsets included in the light source set around each image acquisition device shown in FIG. 2 and the positions of the light source subsets can also be changed as required.
  • any set of light sources may also include a single subset of light sources (eg, the first set of light sources may only include subset 222 of light sources) or more than two subsets of light sources.
  • the lighting system includes at least one light source pair, and each light source pair includes a blue light source and a green light source.
  • This example is a preferable way to arrange the light sources. This way makes each paired blue light source and green light source as close as possible, so that the acquisition conditions of the blue channel and the green channel of the image are as close as possible to the same. It helps to improve the accuracy of subsequent handprint images obtained based on the two channels.
  • each light source set may include at least one light source pair, and each light source pair includes a blue light source and a green light source.
  • any subset of light sources in FIG. 2 may include a pair of light sources.
  • the lighting system includes at least one blue light source and at least one green light source.
  • the number of light sources in the lighting system can be set to any suitable number as required. If classified according to color, the light sources included in the lighting system can be divided into a blue light source group and a green light source group.
  • the number of blue light sources included in the blue light source group and the number of green light sources included in the green light source group can be arbitrary, and they can be consistent or inconsistent.
  • the number of blue light sources included in the blue light source group is consistent with the number of green light sources included in the green light source group.
  • the handprint collection system 200 may further include a casing, and at least part of the light sources in the one or more image collection devices and the lighting system may be disposed inside the casing.
  • a window may be provided on the casing, and a light-transmitting plate may be provided on the window.
  • the handprint collection area can be located above the light-transmitting plate, and the user puts a finger in the handprint collection area, and the lighting system below can shine light on the finger through the light-transmitting plate.
  • the image acquisition device below the handprint acquisition area can also acquire fingerprint images of fingers through the light-transmitting plate.
• the embodiment in which the handprint collection area is located above the light-transmitting plate is only an example; the light-transmitting plate is optional, and the position of the handprint collection area relative to the image collection devices and at least part of the light sources in the lighting system can also be changed, as long as the handprint collection area is located within the imaging range of the image collection devices and within the illumination range of the lighting system.
  • the imaging range of any image acquisition device refers to the range covered by a cone with the optical center of the lens of the image acquisition device as the vertex and the viewing angle of the image acquisition device as the cone angle.
  • the light-transmitting plate can be installed in the casing at a set height, so as to prevent the virtual image formed by the illumination source through the light-transmitting plate from entering the imaging range of the image acquisition device.
  • the lighting system may also include a light source located near the handprint collection area, that is to say, the lighting system may also include an auxiliary light source located above the light-transmitting plate, so as to illuminate the hand at a position closer to the hand.
  • the auxiliary light source can also include both blue light source and green light source.
  • the auxiliary light source can include one or more of point light source, line light source, and surface light source, and can be located near the hand in the handprint collection area.
• the auxiliary light source includes light strips composed of point light sources on both sides of the handprint collection area and a point light source near the fingertip location.
  • the image collection device can collect the user's fingerprint or palmprint image.
  • the handprint collection system can be used to collect only one type of hand texture, or it can be used to collect multiple types of hand textures.
  • the handprint collection system can only be used to collect fingerprints in the fingertip area (including fingerprints on the front and side of the finger pad), and can also be used to collect fingerprints in the fingertip area and palm prints at the same time, and can also be used to collect Fingerprints in the fingertip area, lines on the knuckles of the fingers except the knuckles where the fingertips are located, and palm prints on the palm.
  • Multiple types of hand textures can share the same handprint collection area, or use different handprint collection areas, and the handprint collection areas of different types of hand textures can partially or completely overlap.
  • the number of image capture devices included in the handprint capture system 200 can be set to any appropriate number as required.
  • the number of image acquisition devices may be 1, 2, 3 or 6 and so on.
  • the handprint collection system 200 may include a single image collection device.
  • the image of the handprint at a single angle can be collected by an image collection device.
• when the collected handprint is a fingerprint, the implementation scheme of collecting the handprint image at a single angle is more suitable for simulating the fingerprint image obtained by plain (flat) impression.
  • the handprint collection system 200 may include multiple image collection devices.
  • multiple image capture devices may be used to capture handprint images from multiple angles. That is, the optical axes of any two image capture devices among the plurality of image capture devices may form a preset angle with each other, and the preset angle is not equal to 0, so as to collect handprint images at different angles.
• the angles/heights of the multiple image acquisition devices can be adjusted so that not only the frontal fingerprint image of the finger pad but also the side fingerprint images of the finger pad can be collected; this implementation scheme of collecting handprint images from multiple angles is more suitable for simulating the fingerprint image obtained by rolled impression.
  • multiple image acquisition devices may be arranged such that their optical axes are approximately perpendicular to the handprint area to be acquired.
  • the optical axis of the image acquisition device used to collect fingerprint images on the front of the finger pad is perpendicular to the front area of the finger pad
  • the optical axis of the image acquisition device used to collect fingerprint images on the side of the finger pad is perpendicular to the side area of the finger pad.
  • the plurality of image acquisition devices may include an image acquisition device for acquiring a first type of hand texture and an image acquisition device for acquiring a second type of hand texture.
  • the field of view, focal length, placement angle, and position of the image acquisition device for acquiring the first type of hand texture and the image acquisition device for acquiring the second type of hand texture may be different. It can be understood that, compared with the image collection device used for collecting fingerprints, the image collection device for collecting palmprints has a larger field of view.
  • the field of view of an image acquisition device used to collect fingerprints or palmprints may refer to the field of view of a single image acquisition device, or may refer to the field of view of a combination of multiple image acquisition devices. For the same image acquisition device, it can be used to acquire both the first type of hand texture and the second type of hand texture.
• the image acquisition devices may also include a preview image acquisition device dedicated to preview, and the preview image acquisition device may have a larger field of view than the image acquisition devices used for shooting; for example, the field of view of a single preview image acquisition device covers the entire palm, while the common field of view of the multiple image acquisition devices used for shooting covers the entire palmprint.
  • Multiple image acquisition devices can capture multiple palmprint images of target parts such as fingers or palms of the user in one-to-one correspondence, and multiple palmprint images can correspond to different parts of the target site.
• the handprint image collected by the first image acquisition device can mainly correspond to the left part of the finger or palm (i.e. the side area, specifically the side area on the left side), the handprint image captured by the second image acquisition device may mainly correspond to the middle part of the finger or palm (i.e. the frontal area), while the handprint image captured by the third image acquisition device may mainly correspond to the right part of the finger or palm (i.e. the side area, specifically the side area on the right side).
  • the optical axis of the image acquisition device is set towards the side where the handprint of the target part is located, for example, the pad of the finger or the palm of the palm.
  • the optical axis of the image collection device can be set vertically upward or obliquely upward so as to collect the fingerprint of the user's finger.
  • the heights of the centers of gravity of the multiple image capture devices may be set to be consistent or inconsistent.
  • the heights of the centers of gravity of multiple image capturing devices may be consistent.
  • the centers of gravity of multiple image capturing devices may lie on a circular arc.
  • the processing device 240 is communicatively connected to one or more image acquisition devices, which connection may be wired or wireless.
  • the processing device 240 may receive handprint images collected by one or more image collection devices, and process the handprint images.
• the processing device 240 can perform any necessary processing operations on the handprint image, including but not limited to one or more of the following: obtaining depth information from the handprint image; performing splicing processing on the handprint images collected by multiple image acquisition devices; and so on.
  • the processing device 240 can also be communicatively connected with the lighting system, and the connection can be wired or wireless.
  • the processing device 240 may control each light source in the lighting system to emit light according to various lighting schemes described herein.
  • other control devices different from the processing device 240 may also be used to control each light source in the lighting system to emit light.
  • the above-mentioned control device may be a control device in the handprint collection system, or an external control device independent of the handprint collection system.
  • Different people's skin has different reflection, absorption and transmittance of light
  • the same person's hand has different reflection, absorption and transmittance of light of different wavelengths.
  • the reflection, absorption and transmission of light are also different when the fingers or palms of the same person are dry, wet or sweaty.
  • both blue light and green light have their own advantages. For example, the contrast of the ridges and valleys of fingerprints or palm prints will be improved under the blue light fill light, which is suitable for people with light fingerprints; while the overdry or wet fingers can be photographed more clearly under the green light fill light.
  • the inventor chooses to supplement the light with blue and green light sources at the same time, so that the two lights can complement each other.
• as long as a clear image can be collected under either color of light, a high-quality handprint image can be obtained relatively easily after processing. Therefore, this scheme of supplementing light with blue and green light sources when collecting handprints can expand the applicability of the handprint collection system to different skin conditions, thereby helping to meet the collection needs of various groups of people.
• the wavelength separation between the blue and green light sources can be set as large as possible, that is, blue light sources with shorter wavelengths (toward violet) and green light sources with longer wavelengths (toward yellow) can be selected where possible.
  • the blue light described herein may be light with a wavelength of less than 430nm, such as light at 405nm, and the green light may be light with a wavelength greater than 540nm, such as 560nm light.
• the lighting system may include one or more light source sets, each light source set includes at least one blue light source and at least one green light source, and the one or more light source sets correspond one-to-one with the one or more image acquisition devices.
  • the light source set corresponding to any image acquisition device is responsible for supplementing light for the image acquisition device, and each time the image acquisition device captures an image, only the light source set corresponding to it can emit light, and other light source sets do not emit light .
  • Assigning a corresponding supplementary light source to each image acquisition device facilitates the control and management of the image acquisition device and the light source, and facilitates the avoidance of mutual interference of different light sources when acquiring images.
• the direction of the light emitted by the light sources in a light source set may be roughly perpendicular to the handprint area collected when that light source set is turned on. In this way, the light is approximately perpendicular to the handprint area, which helps ensure that the valleys and ridges of the handprint are more accurately presented in the handprint image. For example, the direction in which the light sources in each light source set emit light is roughly parallel to the optical axis of the corresponding image acquisition device, which can be achieved by arranging the light sources in the light source set close enough to their corresponding image acquisition device.
  • optical axes 2122 , 2142 , and 2162 of image capture devices 212 , 214 , and 216 , respectively, are shown. It can be seen from FIG. 2 that the direction of the light emitted by the light sources in each light source set is parallel to the direction of the optical axis of the corresponding collection device. It should be noted that the parallel described herein does not necessarily require parallel objects to be absolutely parallel to each other, but may be approximately parallel.
  • parallel may mean that the angle between a parallel object (in this embodiment, the direction of the light emitted by the light source and the direction of the optical axis of the corresponding image acquisition device or any two light rays emitted by the same parallel light source) is smaller than a preset threshold .
  • the threshold can be set as small as possible, for example, set to 10° or the like.
  • the lighting system includes at least one light source pair, each light source pair includes a blue light source and a green light source, and the distance between the blue light source and the green light source in the same light source pair is smaller than a preset distance threshold.
  • the blue light source and the green light source in the same light source pair can be turned on or off at the same time, which is more conducive to ensuring that the imaging conditions of the blue channel and the green channel are closer to the same.
  • the same imaging conditions of the blue channel and the green channel help to improve the fusion effect of the blue channel image and the green channel image in the subsequent image fusion.
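• As an illustrative sketch of the kind of blue/green channel fusion referred to here (the weighted average is an assumed fusion rule, not one specified in this passage), the blue and green channels of a frame captured under simultaneous blue and green fill light can be separated and combined into a single grayscale handprint image.

```python
import numpy as np
import cv2

def fuse_blue_green(bgr_capture, w_blue=0.5):
    """Fuse the blue and green channels of a capture taken under simultaneous
    blue and green fill light into one grayscale handprint image.

    The fixed weighted average used here is only an illustrative rule; the
    weight could instead be chosen per region, e.g. from local ridge contrast.
    """
    b, g, _ = cv2.split(bgr_capture)  # OpenCV stores channels in B, G, R order
    fused = w_blue * b.astype(np.float32) + (1.0 - w_blue) * g.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Illustrative call on a synthetic 3-channel frame:
frame = np.dstack([np.full((4, 4), 90, np.uint8),    # blue channel
                   np.full((4, 4), 150, np.uint8),   # green channel
                   np.zeros((4, 4), np.uint8)])      # red channel (unused)
print(fuse_blue_green(frame)[0, 0])  # 120 with equal weights
```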
  • the light sources in the lighting system may all be unpaired, or the lighting system may include light source pairs and unpaired light sources.
  • Each light source set may include at least one light source pair.
  • Each pair of light sources may include a blue light source and a green light source.
  • the blue light source and the green light source in each light source pair can be as close as possible to further ensure that the imaging conditions of the blue channel and the green channel are close to the same.
• any light source subset (e.g., 222, 222', 224, 224', 226, 226') may include at least one light source pair.
  • the preset distance threshold may be set to any appropriate value as required, and this application is not limited thereto.
  • the illumination system is located outside the imaging range of one or more image acquisition devices.
  • the illumination system includes one or more sets of light sources
  • the set of light sources corresponding to any image capture device may be located outside the imaging range of that image capture device.
  • the light source can be arranged at any suitable position around the image acquisition device, as long as it does not hinder the imaging of the image acquisition device.
  • the set of light sources corresponding to any image capture device is set outside the imaging range of the image capture device, which can effectively prevent the light source from hindering the image capture device from collecting handprint images.
• the one or more image acquisition devices include a first image acquisition device, a second image acquisition device and a third image acquisition device, wherein the optical axis of the first image acquisition device is perpendicular to the plane of the handprint acquisition area facing the one or more image acquisition devices, the optical axis of the second image acquisition device forms a first preset angle with the optical axis of the first image acquisition device, and the optical axis of the third image acquisition device forms a second preset angle with the optical axis of the first image acquisition device.
• the plane of the handprint collection area facing the one or more image collection devices may be, for example, a plane parallel to the plane where the above-mentioned light-transmitting plate is located.
  • the plane of the palmprint collection area facing one or more image capture devices can be parallel to the ground plane.
  • the optical axis of the first image acquisition device may be vertically upward
  • the optical axes of the second image acquisition device and the third image acquisition device may be obliquely upward.
  • any one of the first preset included angle and the second preset included angle may be set to any appropriate value as required, which is not limited in this application.
  • the first preset included angle and the second preset included angle may be the same or different.
  • the first preset included angle may be in the range of 0 to 45 degrees
  • the second preset included angle may also be in the range of 0 to 45 degrees.
  • the optical axes of the first image acquisition device, the second image acquisition device and the third image acquisition device are located in the same plane.
  • the arrangement of the optical axes of the three image acquisition devices in the same plane and arranged non-parallel to each other is simple, and also helps to more conveniently acquire images of multiple different parts of target parts such as fingers or palms.
  • the handprint collection system may further include a structured light projector for emitting structured light to the handprint collection area; the handprint images are then processed as follows: the grayscale image of the channel corresponding to the structured light is extracted from the handprint images collected by the one or more image acquisition devices, and the depth information of the handprint is determined based on the extracted structured light images.
  • FIG. 14 shows a schematic diagram of a handprint collection system 200, a related finger 400, and the handprint collection area 300 according to one embodiment of the present application.
  • FIG. 14 shows that the handprint collection system 200 also includes a structured light projector 100 .
  • FIG. 2 can be understood as a front view of the handprint collection system 200
  • FIG. 14 can be understood as a left side view of the handprint collection system 200 .
  • FIG. 14 only shows a single image acquisition device 214 as an example for description, and those skilled in the art can understand the positions of other image acquisition devices in FIG. 14 .
  • FIG. 14 shows that, taking a specific cross-section 302 as the boundary, the structured light projector 100 is located on the side closer to the root of the finger, that is, the head of the structured light projector 100 is horizontally closer to the root of the finger than its tail (the opposite part), wherein the specific cross-section 302 is the cross-section of the finger 400 that coincides with the central axis of the handprint collection area 300.
  • the arrangement of the above-mentioned structured light projector 100 is only an example; it may also be located on the side farther from the root of the finger bounded by the specific cross-section 302, that is, the head of the structured light projector 100 may instead be horizontally farther from the root of the finger than the tail.
  • the structured light projector 100 can be arranged at any suitable position as required, as long as it does not hinder the imaging of the image acquisition device and can illuminate the handprint acquisition area.
  • the height of the center of gravity of the structured light projector 100 and the height of the integrated center of gravity or the lowest center of gravity of one or more image acquisition devices may be set to be substantially the same. In this way, it is convenient to arrange the components in the handprint collection system more compactly, and it is beneficial to reduce the volume of the device.
  • the height of the center of gravity of the structured light projector 100 can be lower than the height of the combined center of gravity or the lowest center of gravity of the one or more image acquisition devices, so that the light-projection area of the structured light projector 100 is large enough to cover the acquisition area corresponding to the target site as effectively as possible.
  • in the case of collecting fingerprints, the target site is relatively small, and the height of the center of gravity of the structured light projector 100 can be set to be substantially the same as the height of the combined center of gravity or the lowest center of gravity of the one or more image acquisition devices; in the case of collecting palm prints, the target site is relatively large, and the height of the center of gravity of the structured light projector 100 may be lower than the height of the combined center of gravity or the lowest center of gravity of the one or more image acquisition devices.
  • the grayscale image of the channel corresponding to the structured light extracted from any handprint image is the structured light image corresponding to the handprint image.
  • the structured light image includes depth information corresponding to each pixel in the original handprint image. Therefore, through the structured light projector, the depth information of the ridges of target parts such as fingers or palms can be further obtained, which facilitates subsequent simulation of a flattened handprint image. At present, many applications still rely on fingerprint images obtained by stamping, so the addition of structured light allows the acquired handprint image to be correctly unrolled into a two-dimensional image that is compatible with stamped fingerprint images in the subsequent comparison process, expanding the application scenarios of the handprint collection system.
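  • As a minimal sketch (not from the original disclosure; it assumes each capture is an H x W x 3 array in RGB order, with the structured light carried by the red channel as in the example below), the per-channel grayscale images could be separated as follows:

```python
import numpy as np

def split_channels(handprint_rgb: np.ndarray):
    """Split a captured frame into its three grayscale channel images.

    Blue and green carry the illuminated ridge texture used for fusion;
    red carries the structured-light pattern used for depth recovery.
    RGB channel order is assumed.
    """
    red = handprint_rgb[..., 0]    # structured-light channel
    green = handprint_rgb[..., 1]  # green illumination channel
    blue = handprint_rgb[..., 2]   # blue illumination channel
    return blue, green, red

# toy frame standing in for a real capture
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
blue_img, green_img, structured_light_img = split_channels(frame)
```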
  • the structured light may preferably use a color that differs significantly from blue and green, for example red.
  • the structured light emitted by the structured light projector may be red light with a wavelength of 660nm.
  • using blue and green for the illumination light and red for the structured light makes it convenient to extract the images of the different channels subsequently.
  • alternatively, red light need not be used for the structured light, and image processing can instead be used to remove the influence of the structured light from the blue and green channels.
  • the direction of the light emitted by the structured light projector and the plane where the optical axes of the one or more image acquisition devices are located form a third preset angle.
  • the third preset angle can be set to any suitable value as required, and this application is not limited thereto.
  • the third preset angle may be different from the above-mentioned first preset angle and the second preset angle, or may be the same as the first preset angle and/or the second preset angle.
  • the optical axes of all the image acquisition devices in FIG. 14 are in the same plane, and this plane appears as a straight line parallel to the optical axis 2142 in the left view shown in FIG. 14 . It can be seen from FIG. 14 that the direction of light emitted by the structured light projector 100 forms a certain angle with the plane where the optical axis of the image acquisition device is located.
  • This method of arranging the structured light projector on the side of the image acquisition device facilitates the structured light projector to avoid the imaging range of the image acquisition device, so as to avoid hindering the imaging of the image acquisition device.
  • this method of arranging the structured light projector on the side of the image acquisition device also facilitates the arrangement of the structured light projector at a height similar to that of the image acquisition device, which is conducive to achieving a more compact layout inside the handprint acquisition system.
  • the handprint collection system 200 may also include a casing; the one or more image acquisition devices and the lighting system are arranged in the casing, a window facing the handprint collection area is provided on the casing, and a light-transmitting plate is provided on the window.
  • the above describes the embodiment in which the image acquisition device and the lighting system are arranged in the casing, and a light-transmitting plate is arranged on the casing, and details will not be repeated here.
  • the light-transmitting plate can be realized by using any suitable material, including but not limited to glass and the like.
  • the glass may be glass with an anti-reflection coating.
  • the light-transmitting plate provides waterproofing and dustproofing, helps protect the main structure of the handprint collection system, and prolongs the service life of the equipment.
  • the handprint collection system 200 may also include a limiting component, which defines the handprint collection area; an inlet is provided at the proximal end of the limiting component to allow part or all of the user's hand to enter the handprint collection area through the inlet, and a stop portion is provided at the distal end of the limiting component, the stop portion being located at a predetermined position in the handprint collection area along the extension direction so as to limit the position to which part or all of the user's hand can extend within the handprint collection area.
  • the proximal end of the limiting component can be understood as the end that is closer to the front of the user when the user normally uses the handprint collection system, and the distal end can be understood as the end that is farther from the front of the user.
  • Fig. 15 shows a schematic diagram of a part of the handprint collection system according to one embodiment of the present application.
  • the limiting component 250 is shown, and the inlet 252 and the stop portion 254 of the limiting component 250 are shown.
  • the area between the entrance 252 and the stopper 254 of the limiting component 250 can be set as the handprint collection area 300 .
  • target parts such as the user's finger or palm can reach into the handprint collection area 300 through the inlet 252.
  • the stopper 254 can limit the limit position that the user's finger or palm can reach, and prevent the user from extending the target part such as finger or palm too far.
  • the limiting component 250 makes it convenient to guide the user to place the target part such as the finger or the palm in the correct position, so as to ensure that the portion of the target part most suitable for detection (for example, the first knuckle of the finger closest to the fingertip) falls into the handprint collection area.
  • the handprint collection system 200 may further include a light-shielding component, and the light-shielding component and at least part of the light sources in the lighting system are respectively located on opposite sides of the handprint collection area.
  • referring to FIG. 15, a light-shielding component 260 is shown; it and at least part of the light sources in the lighting system are located on opposite sides of the handprint collection area 300.
  • the shading component is located above the handprint collection area, and at least part of the light sources in the lighting system are located below the handprint collection area.
  • the shading component 260 can prevent the light emitted by the light source from reaching the outside to affect the user experience, and at the same time prevent the stray light from the outside from shining into the handprint collection area to interfere with the collection of the handprints.
  • the handprint collection area may be at least a part of the area between the light-shielding component and the light-transmitting plate.
  • FIG. 16 shows a schematic diagram of a partial structure of a handprint collection system 200 and related fingers and handprint collection areas according to an embodiment of the present application. It should be noted that FIG. 16 is a schematic diagram and does not represent the actual shape or size of each component. Referring to FIG. 16 , it shows a light-shielding member 260 and a light-transmitting plate 270 , and a handprint collection area 300 is disposed therebetween. In addition, FIG. 16 also shows two groups of auxiliary light sources 2281 and 2282 on both sides of the handprint collection area 300 and an auxiliary light source 2283 near the fingertip position of the handprint collection area 300 .
  • the auxiliary light sources 2281 and 2282 on both sides of the handprint collection area 300 are each a light strip, and the auxiliary light source 2283 near the fingertip position of the handprint collection area 300 is a point light source; this is only an example, and the auxiliary light sources may take other implementation forms.
  • the finger 400 is represented by a dotted line for convenience of distinction.
  • the handprint collection system may also include a light-shielding part 260, and the light-shielding part 260 and the lighting system are respectively located on opposite sides of the handprint collection area.
  • the limiting component 250 also includes a connecting portion connecting the inlet and the stop portion, and the upper surface of the connecting portion is attached to the lower surface of the light-shielding portion.
  • connection portion 256 connecting the inlet 252 and the stop portion 254 in the limiting member 250 is shown.
  • the upper surface of the connecting part 256 can be attached to the lower surface of the light shielding part 260, which can better prevent the light emitted by the light source from escaping to the outside and better prevent outside stray light from irradiating the handprint collection area.
  • a handprint collection method implemented by the above-mentioned handprint collection system 200 includes: while the blue light source group emits light according to a blue light-emitting scheme and the green light source group emits light according to a green light-emitting scheme, collecting images of the handprint collection area with the one or more image acquisition devices to obtain the handprint images collected by each image acquisition device, such that, among the handprint images collected by each image acquisition device, at least one handprint image is collected while there is a blue light source in the blue light source group whose luminous intensity is not 0 and at least one handprint image is collected while there is a green light source in the green light source group whose luminous intensity is not 0; wherein the blue light source group includes one or more blue light sources and the green light source group includes one or more green light sources.
  • each light-emitting scheme is used to characterize the luminous intensity of each light source in the corresponding light source group.
  • the lighting system can be divided into two groups according to the colors of the light sources, namely, the blue light source group and the green light source group.
  • the blue light source group may include all blue light sources in the lighting system
  • the green light source group may include all green light sources in the lighting system.
  • for each image acquisition device in the handprint acquisition system, it is only necessary to ensure that at least one of the handprint images it collects is collected while there is a blue light source in the blue light source group whose luminous intensity is not 0, and at least one is collected while there is a green light source in the green light source group whose luminous intensity is not 0.
  • beyond this, the image-collection timing of all image acquisition devices and the light-emission timing of all light sources in the lighting system can be set as required.
  • the "at least one handprint image" collected while a blue light source has non-zero luminous intensity and the "at least one handprint image" collected while a green light source has non-zero luminous intensity may overlap, that is, one or more handprint images may belong to both.
  • for example, any image acquisition device X may acquire two handprint images in total: the first collected while at least one blue light source has non-zero luminous intensity and all green light sources have luminous intensity 0, and the second collected while at least one green light source has non-zero luminous intensity and all blue light sources have luminous intensity 0.
  • the image acquisition device X may only acquire one handprint image, and the handprint image may be acquired when the luminous intensity of at least one blue light source is not zero and the luminous intensity of at least one green light source is not zero.
  • the image acquisition device X can also acquire three handprint images: the first collected while at least one blue light source has non-zero luminous intensity and all green light sources have intensity 0, the second collected while at least one blue light source and at least one green light source both have non-zero luminous intensity, and the third collected while at least one green light source has non-zero luminous intensity and all blue light sources have intensity 0.
  • the above collection schemes of the image collection device X are all feasible, and of course, other numbers of images can be collected and other lighting methods can be used, which will not be repeated here.
  • each acquisition corresponds to a blue light-emitting scheme and a green light-emitting scheme
  • the handprint image obtained in that acquisition also corresponds to that blue light-emitting scheme and that green light-emitting scheme.
  • each blue light-emitting scheme is used to characterize the luminous intensity of each light source in the blue light source group, and each green light-emitting scheme is used to characterize the luminous intensity of each light source in the green light source group.
  • the relationship between the green light source group and the blue light source group is similar, so this article will not go into details.
  • if the blue light source group includes three blue light sources B1, B2, and B3, and the luminous intensity of each light source is divided into only two levels, one being the maximum intensity and the other being intensity 0, then there are theoretically (i.e., at most) eight blue light-emitting schemes.
  • the above eight blue light-emitting schemes can be expressed as 000, 001, 010, 100, 011, 101, 110, 111.
  • some or all of the eight blue light-emitting schemes can be used to sequentially emit light, and a corresponding handprint image can be collected under each light-emitting scheme.
  • a total of six blue light-emitting schemes are used to emit light in any order.
  • each time the blue light source group emits light according to any one of the blue light-emitting schemes, at least some of the one or more image acquisition devices collect the corresponding handprint image.
  • the same blue light-emitting scheme can appear one or more times.
  • any blue light-emitting scheme P among the above-mentioned six blue light-emitting schemes appears three times, and other blue light-emitting schemes appear once.
  • in that case, eight images are collected in total, and the light-emitting scheme corresponding to three of those images is blue light-emitting scheme P.
  • the luminous intensity of at least one light source is not 0 during each acquisition.
  • although the luminous intensity of each light source is divided into only two levels in the description above, it can be understood that the luminous intensity of each light source may have more levels, and the numbers of luminous-intensity levels of any two different light sources may be the same or different; these can be set arbitrarily.
  • in that case the maximum possible number of blue/green light-emitting schemes also increases, but the blue/green light-emitting schemes finally used to supplement light for any image acquisition device can be selected as required and may include all possible blue/green light-emitting schemes or only a subset of them, as in the enumeration sketch below.
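  • The following sketch (not part of the original disclosure) simply enumerates the on/off light-emitting schemes for a group of three sources such as B1, B2, B3; the all-off scheme is excluded here because at least one light source emits during each acquisition:

```python
from itertools import product

def binary_schemes(num_sources: int, include_all_off: bool = False):
    """Enumerate on/off light-emitting schemes for a group of light sources.

    Each scheme is a tuple such as (1, 0, 0), meaning 'only the first source on'.
    """
    schemes = list(product((0, 1), repeat=num_sources))
    if not include_all_off:
        schemes = [s for s in schemes if any(s)]   # drop 000
    return schemes

blue_schemes = binary_schemes(3)
print(len(blue_schemes), blue_schemes)   # 7 usable schemes out of 2**3 = 8
```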
  • with the handprint collection method of the embodiment of the present application, for each image acquisition device, at least one of the collected handprint images is collected while a blue light source has non-zero luminous intensity and at least one while a green light source has non-zero luminous intensity. Therefore, when each image acquisition device captures an image, the blue and green light sources can jointly supplement light for it, which expands the applicability of the handprint collection method to different skin states and thereby helps meet the collection needs of various groups of people.
  • the above collection includes multiple times, and at least two of the multiple collections correspond to different blue light emission schemes, and/or, at least two of the multiple collections correspond to different green light emission schemes.
  • the operation of acquiring handprint images can be performed multiple times.
  • the image acquisition devices performing any two different acquisition operations may be the same or different.
  • the lighting scheme corresponding to at least two of the above-mentioned multiple collections can be changed.
  • the changed scheme can be the blue light-emitting scheme, the green light-emitting scheme, or both the blue and the green light-emitting schemes.
  • in the case where both change, the at least two acquisitions corresponding to the change of the blue light-emitting scheme and the at least two acquisitions corresponding to the change of the green light-emitting scheme may be completely different, completely the same, or only partially the same.
  • the at least two collections with different blue light emitting schemes may be continuous at least two collections, or may be partially or completely discontinuous at least two collections.
  • the at least two acquisitions with different green light emission schemes may be at least two acquisitions that are continuous, or at least two acquisitions that are partly or completely discontinuous.
  • the collection situation faced by the handprint collection system is complex and may change at any time: different people have different skin conditions and place the target part in different positions, and these can also change over time. Under different acquisition conditions, the lighting conditions required to obtain a good-quality handprint image may also differ. If the light-emitting scheme stays fixed while the image acquisition device collects handprint images, only handprint images of a single character can be collected, and if the current light-emitting scheme does not suit the current user's collection situation, the ridge-collection effect of the final handprint image may be unsatisfactory.
  • by changing the light-emitting scheme to a certain extent, the character of the collected handprint images also varies to a certain extent, which compensates for the limitations of a single handprint image and helps achieve a better handprint collection effect.
  • the above collection is performed multiple times, and the blue light-emitting schemes corresponding to every two of the multiple collections are different, and/or the green light-emitting schemes corresponding to every two of the multiple collections are different.
  • the blue light emission scheme or the green light emission scheme corresponding to at least two acquisitions are the same.
  • the blue light-emitting scheme may be kept constant and only the green light-emitting scheme may be changed, or the green light-emitting scheme may be kept constant and only the blue light-emitting scheme be changed.
  • with this embodiment, generally speaking, the change span of the lighting scheme is small, and it is convenient to align the handprint images collected at different times based on the color whose lighting scheme stays constant.
  • collecting images of the handprint collection area with the one or more image acquisition devices includes: at the first moment, while the blue light source group emits light according to the first blue light-emitting scheme and the green light source group emits light according to the first green light-emitting scheme, collecting images of the handprint collection area with at least some of the one or more image acquisition devices; and at the second moment, while the blue light source group emits light according to the second blue light-emitting scheme and the green light source group emits light according to the second green light-emitting scheme, collecting images of the handprint collection area with at least some of the one or more image acquisition devices, wherein the first blue light-emitting scheme and the second blue light-emitting scheme are selected from the set of blue light-emitting schemes, and the first green light-emitting scheme and the second green light-emitting scheme are selected from the set of green light-emitting schemes.
  • suppose the luminous intensity of each light source in the lighting system is divided into only two levels, one non-zero and one zero. Then the set of blue light-emitting schemes may include at most 2^k1 blue light-emitting schemes, k1 being the number of blue light sources; similarly, the set of green light-emitting schemes may include at most 2^k2 green light-emitting schemes, k2 being the number of green light sources.
  • for the non-zero level, the luminous intensity can take any value, such as the maximum luminous intensity of the light source, or half of the maximum luminous intensity, and so on.
  • the lighting scheme may change.
  • the first moment and the second moment may emit light according to different blue light emitting schemes and/or different green light emitting schemes respectively.
  • the image acquisition devices that acquire the handprint images at the first moment and the second moment may be completely the same, completely different or partially the same.
  • the first blue light emitting scheme is the same as the second blue light emitting scheme, or the first green light emitting scheme is the same as the second green light emitting scheme.
  • suppose the blue light source group includes three blue light sources B1, B2, and B3, the green light source group includes three green light sources G1, G2, and G3, and a non-zero luminous intensity is represented by the digit "1" while an intensity of 0 is represented by the digit "0".
  • the first blue light-emitting scheme adopted may be 100, and the first green light-emitting scheme may be 111; and at the second moment, the second blue light-emitting scheme adopted may still be 100, and the second green light-emitting scheme may be 100.
  • collecting images of the handprint collection area by one or more image acquisition devices further includes: at the third moment , when the blue light source group emits light with the third blue light-emitting scheme, and the green light source group emits light with the third green light-emitting scheme, at least part of the image capture devices in the one or more image capture devices are used to capture images of the handprint collection area,
  • the third blue light-emitting scheme is selected from the set of blue light-emitting schemes
  • the third green light-emitting scheme is selected from the set of green light-emitting schemes;
  • the third blue light-emitting scheme is different from at least one of the first blue light-emitting scheme and the second blue light-emitting scheme; and/or, the third green light-emitting scheme is different from at least one of the first green light-emitting scheme and the second green light-emitting scheme.
  • the image acquisition device for acquiring the handprint image at the third moment may be completely the same, completely different or partially the same as the image acquisition device for acquiring the handprint image at the first moment.
  • the image acquisition device for acquiring the handprint image at the third moment may be completely the same, completely different or partly the same as the image acquisition device for acquiring the handprint image at the second moment.
  • a third acquisition is added.
  • the blue light emission scheme or the green light emission scheme may be changed relative to at least one of the first two acquisitions.
  • the lighting scheme satisfies both the first condition and the second condition;
  • the first condition includes: the first blue lighting scheme is the same as the second blue lighting scheme, or the first green lighting scheme is the same as the second green lighting scheme The scheme is the same;
  • the second condition includes: the third blue light-emitting scheme is the same as the first blue light-emitting scheme, or the third blue light-emitting scheme is the same as the second blue light-emitting scheme, or the third green light-emitting scheme is the same as the first green light-emitting scheme, or the third green light-emitting scheme is the same as the second green light-emitting scheme.
  • for example, the first blue light-emitting scheme is the same as the second blue light-emitting scheme, the third blue light-emitting scheme is different from both the first and the second blue light-emitting schemes, the first green light-emitting scheme is different from the second green light-emitting scheme, and the third green light-emitting scheme is the same as the first or the second green light-emitting scheme.
  • for another example, the first green light-emitting scheme is the same as the second green light-emitting scheme, the third green light-emitting scheme is different from both the first and the second green light-emitting schemes, the first blue light-emitting scheme is different from the second blue light-emitting scheme, and the third blue light-emitting scheme is the same as the first or the second blue light-emitting scheme.
  • the blue emission scheme is changed once, and the green emission scheme is changed once. In this way, it is convenient to collect handprint images under richer luminous conditions.
  • the number of image acquisition devices in one or more image acquisition devices is greater than 1, and all image acquisition devices are used for each acquisition.
  • All image acquisition devices are used to collect handprint images together during each collection, so that all collections can be completed only by sequentially emitting light according to all required lighting schemes. This collection scheme is more efficient.
  • the number of image acquisition devices in the one or more image acquisition devices is greater than 1, and the image acquisition devices used in at least two acquisitions among the multiple acquisitions are not exactly the same.
  • the image acquisition device at the center belongs to the first group of image acquisition devices, and the two image acquisition devices at the two sides belong to the second group of image acquisition devices. Images may be collected by the first group of image acquisition devices at the first moment and by the second group of image acquisition devices at the second moment. Although different image acquisition devices collect images at different moments, the handprint images collected by the different image acquisition devices can still be spliced. This separate acquisition approach helps prevent the light sources corresponding to other image acquisition devices from interfering with the acquisition of the current image acquisition device.
  • one or more image acquisition devices include a first image acquisition device, a second image acquisition device, and a third image acquisition device, wherein the optical axis of the first image acquisition device is perpendicular to the plane of the handprint collection area facing the one or more image acquisition devices, the optical axis of the second image acquisition device forms a first preset angle with the optical axis of the first image acquisition device, and the optical axis of the third image acquisition device forms a second preset angle with the optical axis of the first image acquisition device; the lighting system includes one or more light source sets, each light source set includes at least one blue light source and at least one green light source, and the one or more light source sets correspond one-to-one with the one or more image acquisition devices; collecting images of the handprint collection area with the one or more image acquisition devices while the blue light source group emits light according to a blue light-emitting scheme and the green light source group emits light according to a green light-emitting scheme includes: at the first moment, while the blue light source group emits light according to the first blue light-emitting scheme and the green light source group emits light according to the first green light-emitting scheme, collecting images of the handprint collection area with the first group of image acquisition devices; and at the second moment, while the blue light source group emits light according to the second blue light-emitting scheme and the green light source group emits light according to the second green light-emitting scheme, collecting images of the handprint collection area with the second group of image acquisition devices.
  • the embodiment in which the lighting system includes light source sets corresponding one-to-one with the image acquisition devices has been described above, so details will not be repeated here.
  • the first image acquisition device, the second image acquisition device and the third image acquisition device can be divided into two groups, the image acquisition device located in the center is one group, and the two image acquisition devices located on both sides are another group .
  • when any group of image acquisition devices acquires images, only the light sources in the corresponding light source sets may have non-zero luminous intensity, while the light sources in the other light source sets all have luminous intensity 0.
  • because the light source set in the center is relatively close to the light source sets on the left and right sides, it is more likely to interfere with the image acquisition of the respectively corresponding image acquisition devices; therefore the central light source set and the light source sets on the two sides can be made to emit light asynchronously, that is, not at the same time, which helps obtain a higher-precision handprint image.
  • the light source sets on both sides are relatively far away from each other, so the light source sets on both sides can optionally be made to emit light synchronously, which can shorten the time for handprint collection as much as possible.
  • the above scheme of making the first group of light source sets and the second group of light source sets emit light asynchronously can be applied when the light sources corresponding to the first group of image acquisition devices affect the image acquisition of the second group of image acquisition devices and/or the light sources corresponding to the second group of image acquisition devices affect the image acquisition of the first group of image acquisition devices.
  • the time difference between the first moment and the second moment may be smaller than a preset time threshold.
  • the preset time threshold may be set to any appropriate value as required, and this application is not limited thereto.
  • the preset time threshold may be 25ms.
  • the time interval between the two groups of light source sets that emit light asynchronously should be set as short as possible, preferably not exceeding 25 ms. This asynchronous lighting method is especially suitable for packaged LED modules.
  • collecting the image of the handprint collection area with the first group of image acquisition devices includes: sending, by the processing device at the first moment, a trigger signal to the first group of image acquisition devices and a light-emission control signal to the first group of light source sets corresponding to the first group of image acquisition devices; at the second moment, while the blue light source group emits light according to the second blue light-emitting scheme and the green light source group emits light according to the second green light-emitting scheme, collecting the image of the handprint collection area with the second group of image acquisition devices includes: sending, by the processing device at the second moment, a trigger signal to the second group of image acquisition devices and a light-emission control signal to the second group of light source sets corresponding to the second group of image acquisition devices.
  • the lighting control signal is used to control the corresponding set of light sources to start to emit light with a non-zero intensity, and the time difference between the first moment and the second moment is equal to the sum of the acquisition time of the first group of image acquisition devices and the preset delay time.
  • by way of example and not limitation, the processing device 240 can control the image acquisition of each image acquisition device and the light emission of each light source.
  • the processing device 240 may control the image acquisition device and the corresponding light source set to start working synchronously by simultaneously sending a trigger signal and a light emission control signal to the image acquisition device and the corresponding light source set.
  • the processing device 240 may first control the first group of image acquisition devices and the first group of light source sets to start working by synchronously sending trigger signals and light-emission control signals; after the acquisition time of the first group of image acquisition devices has elapsed and a further delay (the preset delay time) has passed, it can then control the second group of image acquisition devices and the second group of light source sets to start working by synchronously sending trigger signals and light-emission control signals.
  • the acquisition time of the image acquisition device is generally relatively short, such as 2ms.
  • the delay time can be set to be small, and the sum of the delay time and the acquisition time of the first group of image acquisition devices is less than the above-mentioned preset time threshold.
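  • As a rough, non-authoritative sketch of this sequencing (the 2 ms acquisition time comes from the example above; the 3 ms delay is an arbitrary assumption), the two trigger moments could be planned as follows:

```python
def schedule_two_group_capture(acquisition_ms: float = 2.0,
                               delay_ms: float = 3.0,
                               max_gap_ms: float = 25.0):
    """Plan the trigger moments for the two groups of cameras and light source sets.

    Group 1 is triggered at t = 0 together with its light source sets; group 2
    is triggered after group 1's acquisition time plus the preset delay.  The
    gap between the two trigger moments should stay below the preset threshold.
    """
    t_first = 0.0
    t_second = t_first + acquisition_ms + delay_ms
    if t_second - t_first >= max_gap_ms:
        raise ValueError("delay too long: the two captures would be too far apart")
    return [("trigger + light group 1", t_first), ("trigger + light group 2", t_second)]

for event, t in schedule_two_group_capture():
    print(f"{t:5.1f} ms  {event}")
```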
  • the lighting system includes one or more light source sets, each light source set includes at least one blue light source and at least one green light source, and the one or more light source sets correspond to one or more image acquisition devices one by one,
  • the blue light source and the green light source in the light source set emit light synchronously, and the light-emitting periods of the blue light source and the green light source in the light source set are within the effective exposure period of the corresponding image acquisition device.
  • the luminous period of the light source refers to the period when the luminous intensity of the light source is not zero.
  • the exposure mode of any image acquisition device in one or more image acquisition devices may adopt global exposure or line exposure.
  • the exposure period of all lines in each frame of image captured by the current image acquisition device is uniform, and the uniform exposure period may be called an effective exposure period.
  • the light-emitting period of each light source in the light source set corresponding to the current image acquisition device only needs to be within the effective exposure period.
  • the light-emitting period of the light sources in the light source set corresponding to the current image capture device may be shorter than or equal to the effective exposure period during global exposure.
  • the effective exposure period may be an exposure period shared by all the rows in each frame of image.
  • Line exposure can be further divided into at least two exposure modes, one is line exposure with initial time synchronization, and the other is line exposure without initial time synchronization.
  • Fig. 17 shows a schematic diagram of image acquisition and light emitting sequence according to an embodiment of the present application.
  • the row exposure mode shown in FIG. 17 is synchronized with the initial time, that is, the initial time of the exposure of each row is the same, but the end time is different.
  • the total exposure time is 15 seconds
  • the exposure time of the first line is 1-10 seconds
  • the exposure time of the second line is 1-11 seconds
  • the exposure time of the third line is 1-12 seconds
  • and so on, until the exposure time of the sixth row is 1-15 seconds.
  • the effective exposure time at this time is 1-10 seconds, so the lighting period of the light source can be set within 1-10 seconds.
  • the total duration during which the luminous intensity of the light source is not 0 may be equal to or shorter than 10 seconds.
  • for line exposure without initial-time synchronization, the initial time of each row differs, unlike the example shown in FIG. 17, while other times, such as the end time of each row's exposure and the data output time, are similar.
  • for example, the exposure time of the first row is 1-10 seconds, and the exposure time of the second row is 2-11 seconds
  • the exposure time of the third row is 3-12 seconds
  • the exposure time of the sixth row is 6-15 seconds.
  • the effective exposure time at this time is 6-10 seconds, so the lighting period of the light source can be set within 6-10 seconds.
  • the total duration during which the luminous intensity of the light source is not 0 may be equal to or shorter than 5 seconds.
  • the light-emitting periods of the blue light source and the green light source in the light source set are within the effective exposure period of the corresponding image acquisition device, so that different positions (for example, different rows) of the image collected by the image acquisition device are acquired under the same lighting conditions, ensuring consistent imaging conditions for the acquired image, which helps improve the effect of subsequent fusion and stitching.
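  • The following sketch (not from the original disclosure) computes the interval shared by all row exposure windows, using the two 6-row examples above; the light sources should only emit within that interval:

```python
def effective_exposure_window(row_windows):
    """Return the (start, end) interval during which all rows are exposing."""
    start = max(s for s, _ in row_windows)
    end = min(e for _, e in row_windows)
    if start >= end:
        raise ValueError("rows share no common exposure interval")
    return start, end

# line exposure with synchronized start times (cf. the example above)
rows_sync = [(1, 10), (1, 11), (1, 12), (1, 13), (1, 14), (1, 15)]
print(effective_exposure_window(rows_sync))      # -> (1, 10)

# line exposure without synchronized start times
rows_rolling = [(1, 10), (2, 11), (3, 12), (4, 13), (5, 14), (6, 15)]
print(effective_exposure_window(rows_rolling))   # -> (6, 10)
```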
  • the lighting system includes one or more light source sets, each light source set includes at least one blue light source and at least one green light source, the one or more light source sets correspond one-to-one with the one or more image acquisition devices, and the method further includes:
  • based on the contrast and/or brightness of the current blue channel image and the contrast and/or brightness of the current green channel image, determining the adjusted blue luminous intensity and the adjusted green luminous intensity of the light source set corresponding to the image acquisition device, such that the contrast and/or brightness of the subsequent blue channel image collected by the image acquisition device under the adjusted blue luminous intensity and of the subsequent green channel image collected under the adjusted green luminous intensity are consistent; and controlling the blue light source in the light source set corresponding to the image acquisition device to emit blue light toward the handprint collection area according to the adjusted blue luminous intensity.
  • the green light source in the light source set corresponding to the image acquisition device is controlled to emit green light to the handprint collection area according to the adjusted green light emission intensity.
  • the present embodiment will be described below by taking three image acquisition devices and three light source sets as examples.
  • the blue-channel and green-channel images of the images currently captured by the three image acquisition devices can be extracted in real time, and the contrast and/or brightness of the blue channel images and green channel images corresponding to the three image acquisition devices can be calculated.
  • the respective brightness values can be obtained by calculating the histogram of the blue channel image and the histogram of the green channel image.
  • the adjusted blue light luminous intensity corresponding to each of the three light source sets corresponding to the three image acquisition devices may be calculated according to the contrast and/or brightness of the three blue channel images corresponding to the three image acquisition devices.
  • similarly, the adjusted green luminous intensity corresponding to each of the three light source sets can be calculated according to the contrast and/or brightness of the three green channel images corresponding to the three image acquisition devices. Then, the next time light is emitted, each of the three light source sets is controlled to emit light according to the adjusted blue luminous intensity and the adjusted green luminous intensity calculated above.
  • the above embodiments can independently adjust the light source intensity of a single channel.
  • the light sources of the same color at different collection positions can have relatively consistent contrast and/or brightness, which facilitates the subsequent stitching of images at different collection positions to obtain a better stitching effect.
  • the luminous intensity of the current blue-green light source can be adjusted based on the skin's response to light to better adapt to different skin conditions, thereby further adapting to more people.
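  • A minimal sketch of such a feedback rule follows (not from the original disclosure; the histogram-based brightness measure follows the bullet above, while the proportional scaling toward a common target brightness is an assumption):

```python
import numpy as np

def channel_brightness(gray: np.ndarray) -> float:
    """Mean brightness computed from the histogram of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    levels = np.arange(256)
    return float((hist * levels).sum() / hist.sum())

def adjusted_intensities(blue_img, green_img, blue_now, green_now, target=128.0):
    """Scale the current blue/green luminous intensities so that the next
    blue-channel and green-channel images have roughly equal brightness."""
    b = channel_brightness(blue_img)
    g = channel_brightness(green_img)
    return blue_now * target / max(b, 1e-6), green_now * target / max(g, 1e-6)
```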
  • collecting images of the handprint collection area through one or more image acquisition devices includes: at any target light-emitting moment , when the blue light source group emits light with a selected blue light-emitting scheme, and the green light source group emits light with a selected green light-emitting scheme, at least part of the image capture devices in one or more image capture devices are used to collect images of the handprint collection area;
  • the selected blue light-emitting scheme is selected from the set of blue light-emitting schemes
  • the selected green light-emitting scheme is selected from the set of green light-emitting schemes
  • the set of blue light-emitting schemes includes w blue light-emitting schemes, where w ∈ {1, 2, 3, ..., N_t1 - 1, N_t1}, and N_t1 is the total number of combinations of the luminous-intensity gears of the blue light sources in the blue light source group, obtained by selecting one gear from the N_i luminous-intensity gears corresponding to the i-th blue light source in the blue light source group
  • the set of green light-emitting schemes includes p green light-emitting schemes, where p ∈ {1, 2, 3, ..., N_t2 - 1, N_t2}, and N_t2 is the total number of combinations of the luminous-intensity gears of the green light sources in the green light source group, obtained by selecting one gear from the N_j luminous-intensity gears corresponding to the j-th green light source in the green light source group
  • the above describes an embodiment where the luminous intensities of the blue light source and the green light source are respectively divided into two levels, but this is only an example.
  • the total number of luminous-intensity gears of the i-th blue light source can be expressed as N_i, where the N_i values of any two different blue light sources can be equal or unequal, and the N_i corresponding to any blue light source may be greater than or equal to 2.
  • the luminous intensity of a certain blue light source can be divided into 3 levels, and the luminous intensity of another blue light source can be divided into 4 levels.
  • the gears of the luminous intensity of all blue light sources can be combined.
  • the lighting system includes three blue light sources
  • the luminous intensity of the first blue light source is divided into 2 levels
  • the luminous intensity of the second blue light source is divided into 4 levels
  • the luminous intensity of the third blue light source is divided into 3 levels.
  • the total number of combinations N_t1 can then be equal to 2 × 4 × 3, that is, equal to 24.
  • the total number of gears of the luminous intensity of the jth green light source can be expressed as N j , where N j corresponding to any two different green light sources can be equal or unequal, and any green light source The corresponding N j may be greater than or equal to 2.
  • the combination of the luminous intensity of the green light source is similar to the combination of the blue light source, and will not be repeated here.
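  • A one-line check of this combination count (not from the original disclosure; the gear counts 2, 4, 3 follow the example above) can be written as:

```python
from itertools import product
from math import prod

gear_counts = [2, 4, 3]     # luminous-intensity levels of the three blue light sources
N_t1 = prod(gear_counts)    # total number of blue light-emitting schemes: 2 * 4 * 3 = 24

# each scheme picks one gear index per light source, e.g. (0, 3, 1)
all_blue_schemes = list(product(*(range(n) for n in gear_counts)))
assert len(all_blue_schemes) == N_t1 == 24
```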
  • for example, at the first moment the selected blue light-emitting scheme is the first blue light-emitting scheme and the selected green light-emitting scheme is the first green light-emitting scheme; at the second moment the selected blue light-emitting scheme is the second blue light-emitting scheme and the selected green light-emitting scheme is the second green light-emitting scheme; and at the third moment the selected blue light-emitting scheme is the third blue light-emitting scheme and the selected green light-emitting scheme is the third green light-emitting scheme.
  • the handprint images collected include at least one first handprint image, and the blue light-emitting schemes adopted for collecting any two of the at least one first handprint image are different from each other
  • the handprint collection method further includes: selecting, from the at least one first handprint image, a first handprint image whose image quality meets the first target requirement, for use in performing the blue-green fusion operation.
  • the handprint images collected include at least one second handprint image, and the green light-emitting schemes adopted for collecting any two of the at least one second handprint image are different from each other; the handprint collection method further includes: selecting, from the at least one second handprint image, a second handprint image whose image quality meets the second target requirement, for use in performing the blue-green fusion operation.
  • the first target requirement and the second target requirement may be the same or different.
  • image quality evaluation may be performed on each first handprint image to obtain a quality score of each first handprint image.
  • the first target requirement may include that the quality score of the first handprint image is greater than a first quality score threshold.
  • the first quality scoring threshold can be set to any size as required.
  • the image quality evaluation may include evaluating one or more of the clarity of the first handprint image, the degree of occlusion of the three-dimensional object (part or all of the user's hand), the size of the three-dimensional object, and the like.
  • image quality evaluation may be performed on each second handprint image to obtain a quality score of each second handprint image.
  • the second target requirement may include that the quality score of the second handprint image is greater than a second quality score threshold.
  • the second quality scoring threshold can be set to any size as required.
  • the image quality evaluation may include evaluating one or more of the clarity of the second handprint image, the degree of occlusion of the three-dimensional object (part or all of the user's hand), the size of the three-dimensional object, and the like.
  • for example, if the blue light source group includes 3 blue light sources in total and the luminous intensity of each blue light source is divided into 3 levels, there are 27 blue light-emitting schemes in total; excluding the case in which the luminous intensity of all blue light sources is 0, 26 blue light-emitting schemes remain.
  • 26 first handprint images can then be collected, one per scheme, and the first handprint image whose image quality meets the first target requirement can be selected from these 26 first handprint images for use in the blue-green fusion operation. The selection of the second handprint image is similar and will not be repeated here.
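  • As a non-authoritative sketch of this selection step (the quality metric here is a simple sharpness proxy standing in for the clarity/occlusion/size evaluation described above; the threshold is arbitrary):

```python
import numpy as np

def quality_score(img: np.ndarray) -> float:
    """Placeholder quality metric: variance of second differences as a sharpness
    proxy.  A real system would also score occlusion and object size."""
    f = img.astype(float)
    return float(np.diff(f, n=2, axis=0).var() + np.diff(f, n=2, axis=1).var())

def select_images(images, threshold: float):
    """Keep only the handprint images whose quality score exceeds the threshold."""
    return [img for img in images if quality_score(img) > threshold]
```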
  • the handprint collection method may further include: performing a blue-green fusion operation.
  • the blue-green fusion operation may include: extracting the blue channel and the green channel from the same handprint image collected by the same image acquisition device to obtain the blue channel image and the green channel image; fusing the blue channel image and the green channel image to obtain the fused handprint image corresponding to the image acquisition device.
  • the first handprint image and the second handprint image can be the same handprint image.
  • the grayscale images of these two channels can be fused together.
  • the fusion operation may be implemented in any suitable fusion manner, and this paper will describe two exemplary fusion manners below.
  • the blue channel image, the green channel image and the fused handprint image are each divided into m units, where m is an integer greater than 1,
  • the fusion of the blue channel image and the green channel image includes: for the i-th unit of the m units, comparing the degree of pixel-value fluctuation of the i-th unit of the further processed blue channel image with that of the i-th unit of the further processed green channel image, and determining the pixel values of the i-th unit of the fused handprint image from the unit with the larger fluctuation, wherein:
  • the further processed blue channel image is the image obtained by further processing the blue channel image, and the further processed green channel image is the image obtained by further processing the green channel image.
  • m can be set to any suitable value as required, and the present invention is not limited thereto.
  • the m units can be divided in any suitable manner.
  • the m units can be divided by pixel, that is, each pixel constitutes one unit, so that the image is divided into as many units as there are pixels.
  • alternatively, every group of several pixels in the image constitutes one unit.
  • the degree of pixel-value fluctuation can be expressed in many ways and mainly reflects how strongly the pixel values change at the current unit of the image. For any unit, the fused handprint image always tries to keep the pixel information from whichever of the blue channel image and the green channel image has the more violent fluctuation at that unit. The fused handprint image obtained in this way has strong pixel contrast, which facilitates subsequent comparison between handprints when the handprint image is used for handprint recognition.
  • in one embodiment, each of the m units includes only a single pixel, the degree of pixel-value fluctuation of any unit is represented by the absolute value of the difference between the pixel value of that unit and the average pixel value of its surrounding pixels, and the further processing includes normalization,
  • the fusion of the blue channel image and the green channel image includes:
  • the fusion may follow the rule F_3 = F_1 if |F_1 - E_1| ≥ |F_2 - E_2|, and F_3 = F_2 otherwise, where the first channel image and the second channel image are respectively one of the normalized blue channel image and the normalized green channel image and differ from each other, a_1 and a_2 are the pixels contained in the i-th unit of the first channel image and the second channel image respectively, F_1 is the pixel value of pixel a_1, F_2 is the pixel value of pixel a_2, E_1 and E_2 are the average pixel values of the pixels surrounding a_1 and a_2 respectively, and F_3 is the pixel value of the pixel contained in the i-th unit of the fused handprint image.
  • this embodiment is a pixel-by-pixel fusion method. Since the pixel values of the grayscale images of the different channels need to be compared against each other, the grayscale images of the two channels can be normalized first. Normalization can be achieved using any suitable normalization method.
  • M can be set to any suitable value as required, and the present invention is not limited thereto.
  • the average pixel value E_1 of the 8 pixels surrounding a_1 can be calculated, and the average pixel value E_2 of the 8 pixels surrounding a_2 can be calculated.
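  • A minimal sketch of this pixel-by-pixel rule (not from the original disclosure; inputs are assumed to be already-normalized float arrays of equal shape, and edge pixels reuse their nearest neighbours):

```python
import numpy as np

def pixelwise_fuse(chan_a: np.ndarray, chan_b: np.ndarray) -> np.ndarray:
    """Keep, per pixel, the value from whichever channel fluctuates more, where
    fluctuation = |pixel - mean of its 8 neighbours|."""
    def neighbour_mean(img: np.ndarray) -> np.ndarray:
        padded = np.pad(img, 1, mode="edge")
        total = np.zeros_like(img, dtype=float)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                total += padded[1 + dy: 1 + dy + img.shape[0],
                                1 + dx: 1 + dx + img.shape[1]]
        return total / 8.0

    fluct_a = np.abs(chan_a - neighbour_mean(chan_a))
    fluct_b = np.abs(chan_b - neighbour_mean(chan_b))
    return np.where(fluct_a >= fluct_b, chan_a, chan_b)
```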
  • normalizing the pixel values of the blue channel image and the green channel image includes: obtaining the first pixel value at a specific percentage position among all pixel values of the blue channel image and the second pixel value at the same specific percentage position among all pixel values of the green channel image; unifying the first pixel value and the second pixel value to the same specific pixel value; scaling all pixel values in the blue channel image based on the ratio between the first pixel value and the specific pixel value to obtain the normalized blue channel image; and scaling all pixel values in the green channel image based on the ratio between the second pixel value and the specific pixel value to obtain the normalized green channel image.
  • the particular percentage can be any suitable percentage value, without limitation herein. In one example, the specific percentage is 20%.
  • the first pixel value can be obtained by taking the pixel value at the top 20% quantile from the blue channel image
  • the second pixel value can be obtained by taking the pixel value at the top 20% quantile from the green channel image.
  • the above-mentioned top 20% quantile refers to the pixel value whose pixel value is at the 20% position among all the pixel values of the entire image.
  • the first pixel value and the second pixel value may be unified to an equal specific pixel value.
  • for example, if the first pixel value is 200 and the second pixel value is 100, the first pixel value can be reduced to half of its original value, i.e., to 100, so that the first pixel value and the second pixel value are both 100.
  • all the pixel values in the blue channel image are reduced to half of the original value, and the pixels in the green channel image are unchanged, so that the normalization of the two channels can be completed.
  • the above example, in which the pixel values of the blue channel image are scaled down while the pixel values of the green channel image are left unchanged, is only an example and does not limit the present invention.
  • the present invention can also adopt other suitable normalization methods. For example, it is possible that the pixel values of the blue channel image are unchanged and the pixel values of the green channel image are enlarged, or that the pixel values of both the blue channel image and the green channel image are changed.
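  • A short sketch of this percentile-based normalization follows (not from the original disclosure; it interprets the "top 20% quantile" as the 80th percentile of the pixel values, which is an assumption, and uses 100 as the common target value as in the example above):

```python
import numpy as np

def normalize_pair(blue: np.ndarray, green: np.ndarray,
                   percent: float = 20.0, target: float = 100.0):
    """Scale both channel images so that the pixel value at the top-`percent`
    position of each image maps to the same target value."""
    blue = blue.astype(float)
    green = green.astype(float)
    ref_blue = np.percentile(blue, 100.0 - percent)    # top-20% position
    ref_green = np.percentile(green, 100.0 - percent)
    return (blue * (target / max(ref_blue, 1e-6)),
            green * (target / max(ref_green, 1e-6)))
```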
  • each of the m units contains a plurality of pixels
  • the fluctuation degree of the pixel value of any unit is represented by the mean square error or gradient of the unit
  • the fusion of the blue channel image and the green channel image includes: calculating the mean square error or gradient of each of the m units in the blue channel image; calculating the mean square error or gradient of each of the m units in the green channel image; selecting, from the i-th unit of the blue channel image and the i-th unit of the green channel image, the unit with the larger mean square error or gradient; and determining the pixel values of the pixels contained in the selected unit as the pixel values of the pixels contained in the i-th unit of the fused handprint image.
  • each cell divided for an image may contain a plurality of pixels.
  • for each pair of corresponding blocks in the blue channel image and the green channel image, the mean square error or gradient of each block can be calculated, and the block with the larger mean square error or gradient is selected as the final block of the fused handprint image.
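  • As an informal illustration of the unit-wise selection described above, the following sketch (the block size of 16 pixels, the use of variance as the mean-square-error measure, and the numpy-based implementation are assumptions) keeps, for every block, the pixels of the channel whose block fluctuates more.

```python
import numpy as np

def fuse_by_blocks(blue: np.ndarray, green: np.ndarray, block: int = 16) -> np.ndarray:
    """Block-wise fusion sketch: for each block (unit), keep the pixels of the
    channel whose block has the larger fluctuation of pixel values."""
    assert blue.shape == green.shape
    fused = np.empty_like(blue)
    height, width = blue.shape
    for y in range(0, height, block):
        for x in range(0, width, block):
            b = blue[y:y + block, x:x + block]
            g = green[y:y + block, x:x + block]
            # The unit with the larger fluctuation carries more ridge detail.
            fused[y:y + block, x:x + block] = b if b.var() >= g.var() else g
    return fused
```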
  • the method further includes: smoothing the pixel values at the junction of adjacent units in the fused handprint image to obtain an updated fused handprint image.
  • the junction between blocks can be smoothed, otherwise the border will be more abrupt.
  • the smoothing process may be any suitable smoothing process, which is not limited herein.
  • an average operation may be performed on pixel values of a preset number of pixels at the junction of two or more blocks, and the average value may be used as a new pixel value of the preset number of pixels.
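  • A minimal sketch of such a smoothing step, assuming the same block size as above and a small averaging band at each seam (both values are illustrative, not prescribed by the embodiment), might look as follows.

```python
import numpy as np

def smooth_block_seams(fused: np.ndarray, block: int = 16, band: int = 2) -> np.ndarray:
    """Average a small band of pixels on either side of every block boundary
    so that the junctions between adjacent units are less abrupt."""
    out = fused.astype(np.float32).copy()
    height, width = out.shape
    for x in range(block, width, block):                  # vertical seams
        cols = slice(max(x - band, 0), min(x + band, width))
        out[:, cols] = out[:, cols].mean(axis=1, keepdims=True)
    for y in range(block, height, block):                 # horizontal seams
        rows = slice(max(y - band, 0), min(y + band, height))
        out[rows, :] = out[rows, :].mean(axis=0, keepdims=True)
    return out
```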
  • the handprint collection method may further include: performing the above-mentioned blue-green fusion operation for each of the one or more image acquisition devices, so as to obtain an overall handprint image.
  • the corresponding fused handprint image can be obtained by fusing the blue channel image and the green channel image of any image acquisition device.
  • the fused handprint image corresponding to the image acquisition device may be used as the overall handprint image.
  • the fused handprint images corresponding to the multiple image acquisition devices may be spliced together to obtain an overall handprint image.
  • the overall handprint image can be used for subsequent purposes such as handprint recognition.
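  • Putting the previous sketches together, a hypothetical end-to-end helper could look like the following; the horizontal concatenation is only a stand-in for the splicing of the fused images, whose details are not specified here.

```python
import numpy as np

def overall_handprint(per_device_channels):
    """per_device_channels: a list of (blue_channel, green_channel) pairs,
    one pair per image acquisition device (arrays assumed to share the same
    height). Reuses the helper sketches above; plain horizontal concatenation
    stands in for the actual splicing."""
    fused = [smooth_block_seams(fuse_by_blocks(normalize_channel(b),
                                               normalize_channel(g)))
             for b, g in per_device_channels]
    return fused[0] if len(fused) == 1 else np.hstack(fused)
```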
  • the existing non-contact collection devices are designed to collect single-finger fingerprints, and there are no non-contact collection devices for handprints that require large collection areas, such as four-finger fingerprints, double-thumb fingerprints, flat-palm palmprints, and side-palm palmprints. If an existing non-contact collection device for single-finger fingerprints is directly used to collect one or more of the aforementioned large-area handprints, the collected handprint will be incomplete, the imaging quality of the peripheral field of view will be lower than that of the central field of view, or the acquisition effect will be unsatisfactory due to the different acquisition accuracy requirements.
  • the embodiment of the present application provides a handprint collection system.
  • the handprint collection system includes a structured light projector 100 , a plurality of fourth image collection devices 1820 and a processing device (not shown).
  • the handprint collection system described in this article can collect handprints including but not limited to single-fingerprints, double-thumbprints, four-fingerprints, flat-palm palmprints and side-palm palmprints, and is especially suitable for collecting large-area palmprints.
  • the structured light projector 100 can be aimed at the handprint collection area, and is used for emitting structured light to the handprint collection area.
  • the dotted rectangular box A in FIG. 19 exemplarily marks the handprint collection area.
  • the handprints expected to be collected may face the structured light projector 100 and the plurality of fourth image collection devices 1820 .
  • although the handprint collection area shown in the figure is located above the structured light projector 100 and the plurality of fourth image acquisition devices 1820, in other embodiments not shown the handprint collection area may also be located in front of the structured light projector 100 and the plurality of fourth image capture devices 1820.
  • the structured light projected by the structured light projector 100 may be film projection structured light.
  • the structured light projector 100 may be located on the side closer to the base of the palm. In this way, the structured light projector 100 can be prevented from occupying the central area of the entire acquisition device, and image acquisition devices can be arranged in the central area, such as a plurality of fourth image acquisition devices 1820, a plurality of fifth image acquisition devices 1830 and /or the sixth image acquisition device 1840 to be mentioned below.
  • the arrangement of the above-mentioned structured light projector 100 is only an example, and it can also be arranged at any position in the peripheral area other than the central area. If allowed, the structured light projector 100 can also be arranged in the central area.
  • when the structured light projector 100 is arranged in the peripheral area, the projection direction of the structured light must form a certain angle with the collection surface (the handprint surface); only then can the projection range of the structured light cover the entire handprint collection area.
  • the pattern generation unit can be arranged at a certain angle to the optical axis of the converging lens, rather than perpendicular to it, so that a point on the pattern generation unit that is closer to the converging lens forms a projected point on the collection surface that is farther away from the converging lens.
  • the image plane formed by the pattern on the pattern generating unit refracted by the converging lens can overlap with the collection surface as much as possible, so that the projected pattern can be as clear as possible. In this way, the out-of-focus of structured light can be avoided, and high-precision three-dimensional reconstruction of the scene can be performed.
  • since the structured light projector 100 is placed obliquely towards the center of the handprint collection area, it can be placed on the side close to the user, such as the side close to the root of the palm, to avoid the light emitted by the light source from shining directly on the user.
  • the field of view of the plurality of fourth image capture devices 1820 can cover the entire handprint collection area, so as to facilitate the collection of images of the entire handprint collection area. After the user places a finger and/or palm over the handprint collection area, the multiple fourth image capture devices 1820 can at least clearly capture the user's fingerprint and/or palmprint images facing the multiple fourth image capture devices 1820.
  • the lenses of the multiple fourth image capture devices 1820 may be located in a predetermined plane, and the multiple fourth image capture devices 1820 may be respectively aimed at multiple first sub-regions in the handprint capture area.
  • the plurality of fourth image capture devices 1820 may be spaced apart from each other by a certain distance. However, any two adjacent first sub-areas overlap or adjoin each other, so that the multiple fourth image capture devices 1820 cooperate with each other to capture images of the entire handprint capture area.
  • the optical axes of the multiple fourth image capture devices 1820 may be arranged parallel to each other. That is, the plurality of fourth image acquisition devices 1820 respectively acquire images in the plurality of first sub-regions at the same angle. Referring to FIG.
  • the number of fourth image acquisition devices can be arbitrary, and can be specifically set according to different usage requirements of the handprint acquisition system and different parameters of the fourth image acquisition device.
  • the plurality of fourth image acquisition devices 1820 may be respectively used to acquire images of the plurality of first sub-regions to obtain the plurality of first target images.
  • the obtained multiple first target images may respectively correspond to images of different parts of the collected user's fingerprint and/or palmprint; that is to say, in this case two adjacent first sub-regions adjoin each other rather than overlap.
  • the processing device may process the plurality of first target images to obtain images of the first type of handprint.
  • the first type of palmprints may include at least one of four-fingerprints, double-thumbprints, flat-palm palmprints, and side-palm palmprints.
  • the plurality of first target images may be a subset of handprint images.
  • in the present application, a plurality of fourth image acquisition devices 1820 are provided to respectively acquire images of the plurality of first sub-regions, so as to obtain the plurality of first target images in a more targeted manner and ensure the clarity of the first target images.
  • the processing device may perform processing such as expansion and splicing on the multiple first target images, and the obtained handprint image after processing is a combination of the multiple first target images.
  • the processing device may process the obtained multiple first target images, and during processing, may splice better imaged parts of the multiple first target images to obtain a complete handprint image. In this way, each part of the handprint image obtained after processing has better definition and higher imaging quality.
  • the multiple fourth image acquisition devices 1820 can acquire three-dimensional shape data of the collected single finger, double thumb, four fingers, flat palm or side palm, and unfolding processing can be performed to eliminate the blurring of hand lines caused by surface undulations and to obtain high-quality planar texture images.
  • the handprint collection system of the embodiment of the present application may be non-contact, that is, the user can collect part or all of the hand over the handprint collection area without touching the collection device. In this way, the problems of high hygiene risk, low collection quality, small collection area, sensitivity to dryness and humidity of the skin, and low collection consistency caused by contact collection can be avoided.
  • the handprint collection system can also, for example, run on a smart device and realize handprint collection by touching the screen with the hand. In this case it is the screen, not the handprint collection system, that is in contact with the hand.
  • the handprint collection system can also be used to collect handprint images without touching the screen.
  • the handprint collection system utilizes multiple fourth image collection devices whose lenses are located in the same plane to respectively collect images of multiple different first sub-regions in the handprint collection area, and the processing device processes these images to obtain a complete handprint image; it is therefore especially suitable for collecting large-area handprints such as four-finger fingerprints, double-thumb fingerprints, flat-palm palmprints and side-palm palmprints.
  • this handprint collection system can also collect single-finger fingerprint images; in this case the finger can be placed at any position in the handprint collection area, giving a better user experience. Whether for large-area handprint collection or for handprint collection with a single finger placed more casually, a complete and high-quality handprint image can be obtained.
  • the distance from each of the plurality of fourth image capture devices 1820 to the handprint capture area is between 72mm-82mm. In this way, it can be ensured that the handprint collection area is within the range of better imaging effect of the fourth image collection device 1820, the definition of multiple first target images can be ensured, and the imaging quality of the handprint images can be ensured. Further, the distance between each of the plurality of fourth image capture devices 1820 and the handprint capture area may be 76mm, so as to achieve better imaging effect.
  • the focal length of each of the plurality of fourth image capture devices 1820 may be between 4-4.2mm.
  • the focal length can be understood as the distance between the rear principal plane of the lens of the fourth image acquisition device 1820 and its focal point.
  • the size of the focal length can affect the proportion of the captured target in the captured picture. Setting the focal length in this way can further ensure that the handprint collection area is within the imaging range of the fourth image collection device 1820, and make the proportion of the handprint image in the first target image more moderate.
  • the f-number of each of the plurality of fourth image capture devices 1820 is not less than 4.
  • the aperture number, also known as the f-number, is the ratio of the focal length of the camera lens of each camera to the diameter of the entrance pupil of the camera lens. A larger f-number corresponds to a smaller aperture, which increases the depth of field and therefore the room for finger movement, while still maintaining sharp imaging.
  • the aperture number of each of the multiple fourth image acquisition devices 1820 is not less than 4, so as to ensure sufficient space for the movement of the collected fingers or palms; within this range the handprint collection system can still collect clear handprint images, improving the user experience.
  • the f-number of each of the plurality of fourth image capture devices 1820 may not be less than 5.6.
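  • For intuition about the f-number relationship stated above, the following sketch computes the entrance-pupil diameter implied by a focal length and f-number, using the example values mentioned in this description (the helper itself is ours, not part of the embodiment).

```python
def entrance_pupil_diameter(focal_length_mm: float, f_number: float) -> float:
    """F-number N = f / D, so the entrance pupil diameter is D = f / N."""
    return focal_length_mm / f_number

# With the example values above: a 4.0 mm focal length at N = 4 gives a
# 1.0 mm pupil, and stopping down to N = 5.6 gives about 0.71 mm, i.e. a
# smaller aperture and therefore a larger depth of field.
print(entrance_pupil_diameter(4.0, 4.0))   # 1.0
print(entrance_pupil_diameter(4.0, 5.6))   # ~0.714
```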
  • the depth of field of each of the plurality of fourth image capture devices 1820 is not less than 20mm. In this way, the thickness of the finger or palm and the height difference of surface fluctuations can be better adapted to ensure the imaging quality.
  • the optical axes of the plurality of fourth image capture devices 1820 may be located in a first plane perpendicular to the predetermined plane. That is, the lenses of the plurality of fourth image capture devices 1820 are arranged in parallel with each other, and are sequentially arranged on the same straight line.
  • in this way, the structure is more compact and easier to install, and field-of-view blocking among the multiple fourth image capture devices 1820 can be effectively prevented.
  • the field of view of each of the plurality of fourth image capture devices 1820 may be greater than or equal to 125 ⁇ 70 mm.
  • the fields of view of the plurality of fourth image capture devices 1820 may adjoin or overlap along the long side corresponding to 125 mm.
  • for example, the long sides of the fields of view of the three fourth image acquisition devices 1820 can adjoin each other, forming a field of view of up to 125 × 210 mm to suit most application scenarios.
  • Such a field of view can ensure that the combined viewing angles of multiple fourth image acquisition devices 1820 can cover a sufficiently large handprint collection area and ensure the integrity of the handprint image.
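  • The combined field of view quoted above follows from simple addition along the short sides when the long sides adjoin; the sketch below reproduces that arithmetic under the idealised assumption of zero overlap.

```python
def combined_fov(per_device_mm=(125, 70), n_devices=3):
    """Combined field of view when the per-device fields of view adjoin along
    the 125 mm long side with no overlap (an idealised assumption)."""
    long_side, short_side = per_device_mm
    return long_side, short_side * n_devices

print(combined_fov())  # (125, 210)
```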
  • the diagonal field of view of each of the plurality of fourth image capture devices 1820 may be between 80°-82°. In this way, after the plurality of fourth image acquisition devices 1820 are arranged reasonably, the first sub-regions can be covered more comprehensively, preventing blind spots that cannot be collected and further ensuring the integrity of the collected handprint images.
  • the working wavelength of each of the plurality of fourth image capture devices 1820 may correspond to the wavelength of visible light.
  • the fourth image collection device 1820 can directly collect the visible light information of the handprint collection area, and the application range of the handprint collection system is wider.
  • each of the plurality of fourth image capture devices 1820 may not include an infrared filter. Clearer handprint images can be obtained without setting infrared filters, and the structure setting is simpler and more reasonable.
  • the resolution of each of the plurality of fourth image capture devices 1820 may be no less than 5 million pixels. Such setting of resolution can ensure the sharpness of each of the plurality of first target images, thereby ensuring the overall sharpness of the handprint image.
  • the optical distortion of each of the plurality of fourth image capture devices 1820 may be less than 5%.
  • Optical distortion is the degree of distortion, caused by the optical system, of the image of an object relative to the object itself.
  • the optical distortion of each of the plurality of fourth image acquisition devices 1820 is less than 5%, which can better ensure the authenticity of each first target image, and further ensure the authenticity of the handprint image.
  • the interface of the lens of the fourth image acquisition device 1820 may be an M12 interface. In this way, the lens size is smaller and the structure is more compact.
  • the handprint collection system may further include a plurality of fifth image collection devices 1830 .
  • the lenses of at least a part of the multiple fifth image capture devices 1830 are located in different planes and aligned with multiple second sub-regions in the handprint capture area. In this way, these fifth image acquisition devices 1830 can better acquire handprint information on different planes, so as to obtain a full single-fingerprint image.
  • Full single-finger fingerprints include fingerprints on the front and two sides of the pad of a single finger.
  • the plurality of fifth image acquisition devices 1830 are respectively used to acquire images of the plurality of second sub-regions to obtain a plurality of second target images, and the processing device is also used to process the plurality of second target images to obtain a second type of handprint image.
  • the second type of fingerprints may include full single-finger prints.
  • the plurality of second sub-regions may be located in different planes from each other. Further, the plurality of second sub-regions may be spatially separated from the plurality of first sub-regions. Alternatively, the plurality of second sub-regions may partially overlap with the plurality of first sub-regions.
  • Multiple fifth image capture devices 1830 may be used to capture 3D images of fingerprints, and multiple second sub-regions may correspond to the front and side regions of the finger, respectively.
  • the processing device may stitch the fingerprint images of the plurality of second sub-regions onto the 3D fingerprint surface of the 3D fingerprint model to obtain a 3D fingerprint image.
  • the area of the fingerprint obtained is much larger than that obtained by traditional contact, and the acquisition quality is higher.
  • the plurality of fifth image capture devices 1830 may be or include the above-mentioned first image capture device, second image capture device and third image capture device.
  • the handprint collection system includes a plurality of fourth image collection devices 1820 and a plurality of fifth image collection devices 1830. If it is necessary to collect high-quality single-finger fingerprints, the multiple fifth image collection devices 1830 can be used; in the case of a large-area handprint or a single-finger side fingerprint, the plurality of fourth image capture devices 1820 may be used. The plurality of fourth image acquisition devices 1820 and the plurality of fifth image acquisition devices 1830 may not work at the same time, and the structured light projector 100 may project structured light both when the fourth image acquisition devices 1820 are operating and when the fifth image acquisition devices 1830 are operating. Therefore, the system can meet various collection requirements such as single finger, four fingers, double thumbs, flat palm and side palm, which makes the application range of the handprint collection system wider.
  • the plurality of fifth image capture devices 1830 may include a middle fifth image capture device 1833 , a left fifth image capture device 1831 and a right fifth image capture device 1832 .
  • the fifth image capture device 1831 on the left is located on the left side of the fifth image capture device 1833 in the middle.
  • the fifth image capture device 1832 on the right is located on the right side of the fifth image capture device 1833 in the middle.
  • the optical axes of the lenses of the left fifth image capture device 1831 and the right fifth image capture device 1832 are tilted at a set angle, respectively facing the corresponding second sub-regions.
  • the left fifth image capture device 1831 and the right fifth image capture device 1832 may be symmetrically arranged on both sides of the middle fifth image capture device 1833.
  • the fifth image capture device 1830 can capture images of the middle, left and right regions of the target respectively.
  • This setting is especially suitable for single-finger collection.
  • the second sub-region can include the fingerprints on the front, left and right sides of the pad of the finger, so that a clearer and more accurate 3D fingerprint image of a single finger can be obtained.
  • the lens of the middle fifth image capture device 1833 may be located in a predetermined plane.
  • the fifth image capture device 1833 in the middle and the multiple fourth image capture devices 1820 are arranged on the same plane, the structure is more compact, and the field of view between the fifth image capture device 1833 and the multiple fourth image capture devices 1820 is prevented from being blocked .
  • two or more fifth image acquisition devices 1830 may also be provided.
  • the two fifth image capture devices 1830 may be arranged at the lower left and lower right of the single finger.
  • the arrangement is more flexible.
  • the distance from each of the plurality of fifth image capture devices 1830 to the handprint capture area is between 72mm-82mm. Such setting can ensure that the handprint collection area is within the better imaging range of the fifth image collection device 1830 and ensure the clarity of multiple second target images. Further, the distance between each of the plurality of fifth image capture devices 1830 and the handprint capture area may be 76 mm to achieve better imaging effect.
  • the focal length of each of the plurality of fifth image capture devices 1830 may be between 5-7mm. It is further ensured that the handprint collection area is within the better imaging range of the fifth image collection device 1830 to ensure the imaging effect of the handprint collection system. Moreover, the proportion of the image of the handprint in the second target image is more moderate. Further, the focal length of each of the fifth image capture devices 1830 may be 6mm.
  • the field of view of each of the plurality of fifth image capture devices 1830 is greater than or equal to 40 ⁇ 40 mm. In this way, it can be ensured that the field of view ranges of the plurality of fifth image capture devices 1830 can cover a sufficiently large handprint capture area.
  • the diagonal field of view of each of the plurality of fifth image capture devices 1830 is between 61°-63°.
  • the multiple fifth image capture devices 1830 can cover multiple second sub-areas more comprehensively after being arranged, preventing the occurrence of dead angle areas that cannot be captured.
  • the diagonal field of view of each of the fifth image capture devices 1830 may be 62°.
  • the depth of field of the plurality of fifth image capture devices 1830 is not less than 20mm. In this way, it can better adapt to the thickness of fingers or palms, as well as the height difference of surface undulations, so as to ensure the imaging quality.
  • the range of field of view of each of the plurality of fifth image capture devices 1830 may be smaller than the range of field of view of each of the fourth image capture devices 1820 .
  • the fifth image acquisition device 1830 can be used to capture a smaller second sub-region
  • the fourth image acquisition device 1820 can be used to capture a relatively larger first sub-region. More detailed and more targeted image collection can be carried out in the handprint collection area, and the collection quality is higher.
  • the focal length of each of the plurality of fifth image capture devices 1830 may be greater than the focal length of each of the plurality of fourth image capture devices 1820 .
  • the second target image captured by the fifth image capture device 1830 has a larger proportion of the subject (i.e., the handprint) than the first target image captured by the fourth image capture device 1820.
  • the fifth image capture device 1830 can better capture fingerprints of smaller subjects such as fingers.
  • the fourth image capture device 1820 can capture a relatively large capture subject such as palm prints of a palm.
  • the diagonal field of view of each of the plurality of fifth image capture devices 1830 may be smaller than the diagonal field of view of each of the fourth image capture devices 1820 . That is, the fifth image capture device 1830 has a smaller field of view than the fourth image capture device 1820 . In this way, the fifth image collection device 1830 collects smaller collection targets, which can reduce interference and ensure the collection quality of all single-fingerprint images.
  • the fourth image capture device 1820 can be used to capture larger capture objects. The larger field of view can ensure the integrity of the handprint image.
  • the optical axes of the plurality of fourth image acquisition devices 1820 are located in a first plane perpendicular to the predetermined plane
  • the optical axes of the plurality of fifth image acquisition devices 1830 are located in a second plane perpendicular to the predetermined plane
  • the first plane is parallel to the second plane.
  • the center of the first sub-region captured by the fourth image capture device is spaced a certain distance from the center of the second sub-region captured by the fifth image capture device. That is, the handprint collection area can be divided into a first sub-area and a second sub-area, and the two can be set independently of each other, and can also have partial overlapping areas. In the corresponding area, more targeted handprint collection can be performed to further ensure the quality of image collection within the range of each sub-area.
  • the handprint collection system may further include a sixth image collection device 1840 .
  • the field of view of the sixth image capture device 1840 covers the entire handprint capture area. In this way, the image of the handprint collection area can be obtained in real time through the sixth image collection device 1840.
  • the handprint collection system may be connected with a display device or the handprint collection system itself includes a display device.
  • the images in the handprint collection area collected by the sixth image collection device 1840 can be displayed in real time by the display device to serve as a preview. It can guide users to place one finger, four fingers, two thumbs, palm or side palm in the appropriate area to improve user experience.
  • the lens of the sixth image capture device 1840 may be located in a predetermined plane.
  • the sixth image capture device 1840 can be located in the same plane as the plurality of fourth image capture devices 1820 , the structure is compact, and the field of view between the sixth image capture device 1840 and the multiple fourth image capture devices 1820 is prevented from being blocked.
  • the optical axis of the sixth image acquisition device 1840 has an included angle with the predetermined plane, so that the optical axis of the sixth image acquisition device 1840 is aligned with the center of the handprint collection area. It can be understood that, in practical applications, due to the limited layout space in the handprint collection system, the optical axis of the sixth image collection device 1840 forms an included angle with the predetermined plane, so that the sixth image collection device 1840 can be farther away from the handprint collection area than the fourth image collection devices 1820 and the fifth image collection devices 1830. In this way, there is sufficient space in the handprint collection system for installing the multiple fourth image capture devices 1820 and the multiple fifth image capture devices 1830; it is only necessary to ensure that the center of the field of view of the sixth image capture device 1840 is aligned with the center of the handprint capture area.
  • the handprint collection system may include an illumination device 1850 .
  • the lighting device 1850 may include one or more light sources.
  • the lighting device 1850 may be the aforementioned lighting system or a part of the aforementioned lighting system.
  • the first plane and the second plane are located between the lighting device 1850 and the structured light projector 100 . It can be understood that the illuminating device 1850 and the structured light projector 100 are located in the edge areas on both sides of the handprint collection system.
  • the plurality of fourth image acquisition devices 1820 and the plurality of fifth image acquisition devices 1830 are located in the central area of the handprint acquisition system.
  • the handprint collection system may further include a housing 1860 .
  • a plurality of fourth image capture devices 1820 may be disposed inside the casing. It can better ensure the stability of the positions of the multiple fourth image capture devices 1820, prevent accidental touches and the like, and ensure the clarity of the captured first target image.
  • a plurality of fifth image capture devices 1830 and sixth image capture devices 1840 may be disposed in the casing 1860 .
  • the structured light projector 100 can be arranged outside the housing 1860, so that the structured light projector 100 can project the structured light to the handprint collection area.
  • the handprint collection system may include a support base 1870 .
  • the housing 1860 is connected to the support base 1870 , and the structured light projector 100 can be installed on the support base 1870 .
  • the arrangement of the supporting seat 1870 can ensure the stability of the relative position between the housing 1860 and the structured light projector 100.
  • a transparent plate 270 may be provided on the housing 1860 corresponding to the handprint collection area.
  • the setting of the light-transmitting plate 270 can play a role of dust prevention.
  • the light-transmitting plate 270 can be installed in the casing 1860 at a set height, so as to avoid as far as possible the virtual image formed by the light of the illuminating device 1850 passing through the light-transmitting plate 270 from affecting the image collection process.
  • the lenses of the multiple fourth image capture devices 1820, the multiple fifth image capture devices 1830, and the sixth image capture device 1840 are arranged in a preset plane, so the positions of the virtual images formed on the light-transmitting plate 270 are more concentrated, and it is easier to determine the position of the light-transmitting plate 270 so as to avoid interference from the virtual images formed on it.
  • the multiple fourth image capture devices 1820 , the multiple fifth image capture devices 1830 and the sixth image capture devices 1840 are installed more centrally, which can make the installation of the light-transmitting plate 270 more convenient.
  • the transparent plate 270 may be made of glass.
  • the handprint collection system may also be provided with a top cover.
  • in this way, the light emitted by the light source of the lighting device 1850 can be prevented from irritating the user's eyes.
  • a handprint collection system is provided.
  • the handprint collection system of the embodiment of the present application can be used for simultaneous collection of four-finger fingerprints, simultaneous collection of double-thumb fingerprints, or palmprint collection.
  • when collecting multi-finger fingerprints or palmprints, different parts of the multiple fingers or the palm are more likely to have tilt angles (including pitch, roll, and yaw angles), so that some areas are not within the clear imaging range of the image capture device and the captured image of those areas is not clear.
  • the handprint collection system 200 has a handprint collection area 300 .
  • the size of the handprint collection area 300 usually needs to fit the size of the four fingers of the palm.
  • the length of the handprint collection area 300 may be 110mm-130mm, and the width of the handprint collection area 300 may be 105mm-125mm.
  • the handprint collection system 200 may include a plurality of seventh image collection devices 2220 . It can be understood that the plurality of seventh image capture devices 2220 may include two, three, four or more seventh image capture devices, which is not limited here.
  • the plurality of seventh image acquisition devices 2220 may adopt industrial cameras or camera modules or other types of image acquisition devices, which are not limited here.
  • the plurality of image capture devices includes two seventh image capture devices. Regardless of the number of seventh image capture devices 2220 , the lenses 121 of a plurality of seventh image capture devices 2220 can be arranged around the centerline and face the handprint capture area 300 . It should be noted that the centerlines mentioned here and below refer to the centerlines of the plurality of seventh image capture devices 2220 . As shown in FIGS. 24 , 27 and 29 , the centerline is a straight line MN.
  • the lenses 121 of the plurality of seventh image acquisition devices 2220 surround the centerline MN, so that the fields of view of the plurality of seventh image acquisition devices are basically overlapped.
  • the central line MN may be substantially parallel to the line connecting the center of the handprint collection area 300 and the center of the whole composed of a plurality of seventh image collection devices 2220 .
  • the lenses 121 of the plurality of seventh image acquisition devices 2220 face the handprint collection area 300; in this way, after the user puts a hand into the handprint collection area 300, the lenses 121 of the plurality of seventh image acquisition devices 2220 can conveniently capture the user's handprints in the handprint collection area 300.
  • each of the plurality of seventh image acquisition devices 2220 has an object-surface subspace that can be clearly imaged, namely the space within the depth-of-field range in front of and behind its best object plane. That is to say, each seventh image acquisition device 2220 has its own corresponding clearly imageable object-surface subspace.
  • the optimal object plane of the seventh image capture device 2220 generally refers to the object plane conjugate to its image plane. Typically, when the focal length of the lens of the image acquisition device is fixed, the image distance of each image acquisition device is basically constant, so the image plane is determined and the optimal object plane can therefore be determined. Clear imaging is still possible within the depth-of-field range in front of and behind the optimal object plane, and the space within this range is the clearly imageable object-surface subspace.
  • the clearly imageable object surface subspaces of the plurality of seventh image acquisition devices 2220 partially overlap.
  • the clear imageable object surface subspaces of the plurality of seventh image acquisition devices 2220 may jointly form a clear imageable total space.
  • two seventh image acquisition devices 2220 are included.
  • the clearly imageable object surface subspaces of the two seventh image capture devices 2220 generally overlap along the extension direction of the centerline, and the clearly imageable object plane subspaces formed by them intersect.
  • the total space is larger than any one of the clearly imageable object surface subspaces.
  • the handprint collection area 300 may include a total space where an object plane can be clearly imaged.
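  • The way the overlapping subspaces combine into a total clearly imageable space can be sketched as a union of intervals along the centerline; the helper and the numerical values below are illustrative assumptions, not parameters of the embodiment.

```python
def total_clear_space(best_planes_mm, depths_of_field_mm):
    """Each device images clearly within +/- half its depth of field around
    its best object plane (measured along the centerline); overlapping
    intervals merge into the total clearly imageable space."""
    intervals = sorted((p - d / 2.0, p + d / 2.0)
                       for p, d in zip(best_planes_mm, depths_of_field_mm))
    merged = [list(intervals[0])]
    for lo, hi in intervals[1:]:
        if lo <= merged[-1][1]:              # subspaces overlap: extend the span
            merged[-1][1] = max(merged[-1][1], hi)
        else:                                # disjoint: start a new span
            merged.append([lo, hi])
    return merged

# Two devices with best object planes 75 mm and 90 mm away and a 20 mm depth
# of field each give one merged span of 65-100 mm.
print(total_clear_space([75.0, 90.0], [20.0, 20.0]))   # [[65.0, 100.0]]
```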
  • the captured handprint images can be made clear through the lenses 121 of the plurality of seventh image collection devices 2220 without contact.
  • the handprint collection area 300 may be the total space where the object plane can be clearly imaged. In this way, when the user places his hand in any area of the handprint collection area 300, there is an image collection device that can collect clear handprint images.
  • the handprint collection area 300 can also be slightly larger than the total space of the object surface that can be clearly imaged.
  • in practice, the user will not place the hand close to the edge of the handprint collection area 300, so as to avoid contact with the surrounding shell of the handprint collection area 300 and the like. Taking this factor into consideration, the handprint collection area 300 may also be slightly larger than the total space of the clearly imageable object plane.
  • the optimal object planes of the plurality of seventh image acquisition devices 2220 are located at different positions in the handprint acquisition area 300, and the clearly imageable object-plane subspaces of the plurality of seventh image acquisition devices 2220 partially overlap, which ensures that at any position in the total clearly imageable space, at least one seventh image capture device 2220 can capture a clear image of the handprint.
  • the handprint collection system 200 of the embodiment of the present application can collect handprints without contact. Because the clearly imageable object-plane subspaces of the multiple seventh image capture devices 2220 partially overlap and the optimal object planes of the multiple seventh image capture devices 2220 are located at different positions in the handprint collection area 300, the total space that can be clearly imaged by the plurality of seventh image acquisition devices 2220 is expanded as much as possible, making the acquisition range of the handprint acquisition area 300 wider. As long as the user puts a hand into the handprint collection area 300, a clear image can be formed, which improves the clarity of handprint collection, avoids handprint images being unusable in subsequent applications such as identification, and reduces the cooperation required from the user.
  • the handprint collection system 200 may further include a housing 1860 .
  • the shape of the casing 1860 can be varied, which is not limited here.
  • the housing 1860 may be substantially in the shape of a cuboid.
  • a plurality of seventh image capture devices 2220 may be located within the housing 1860 .
  • the handprint collection system 200 may also include a cover 180 .
  • the cover 180 may be located above or below the housing 1860 .
  • the handprint collection area 300 can be located in the cover body 180 .
  • the cover body 180 and the housing 1860 can be integrally formed or connected together after separate processing, which is not limited here.
  • the surface of the housing 1860 facing the handprint collection area 300 can transmit light.
  • the handprint collection area 300 is defined by the enclosure of the top plate 171 of the housing 1860 and the cover 180 .
  • the top plate 171 forms a surface facing the handprint collection area 300 .
  • a light-transmitting opening 172 may be disposed on the top plate 171 of the housing 1860 .
  • the top plate 171 can transmit light through the light-transmitting opening 172 .
  • part or all of the top plate 171 may be formed of a transparent material such as glass, thereby realizing light transmission.
  • light transmission can also be achieved by removing the top plate 171 . In this way, the casing 1860 can protect the plurality of seventh image capture devices 2220 .
  • the cover body 180 is equivalent to forming a dedicated area for the handprint collection area 300, which is convenient for users to identify and operate.
  • the shell 1860 and the cover 180 can also make the overall appearance of the handprint collection system 200 more beautiful, improving the experience of the user.
  • the cover body 180 can avoid the glare problem of supplementary light, and can bring good experience to users.
  • the handprint collection system 200 may further include a status indicator light 191 .
  • the status indicator light 191 can be a multi-color indicator bar, and the colors include but not limited to red, green, and blue. In this way, the state of the handprint collection system 200 is prompted to the user through the color change of the state indicator light 191 .
  • the handprint collection system 200 may further include a collection guide light 192 .
  • the collection guide light 192 can be located around the handprint collection area 300 , and is used to show the range of the handprint collection area 300 to the user, prompting the user to place four fingers or two thumbs in the handprint collection area 300 . In this way, the collection guide light 192 can guide the user to place the finger to be collected in the handprint collection area 300 more quickly, which improves the collection efficiency of the handprint collection system 200 .
  • the handprint collection system 200 may further include a liquid crystal display 193 .
  • the liquid crystal display 193 can be used for interaction with the user and can provide prompts to the user, such as playing collection animations, displaying preview images in real time, guiding posture adjustment in real time, and displaying collection results, giving the user an intuitive sense of the collection process.
  • the focal lengths of the lenses 121 of the plurality of seventh image capture devices 2220 may be different. With different focal lengths, even if the seventh image capture devices 2220 have the same distance to the handprint collection area 300 , the optimum object planes of multiple seventh image capture devices 2220 can be located at different positions in the handprint collection area 300 . That is to say, different focal lengths provide a basis for setting multiple seventh image capture devices 2220 at the same position.
  • the focal length of one seventh image acquisition device may be 8mm, and the focal length of the other seventh image acquisition device may be 6mm .
  • in this way, the fields of view of the two cameras are not very different, thereby ensuring that the resolution (DPI) of the handprint collection system 200 will not decrease rapidly and monotonically as the distance between each position and the camera increases.
  • here, DPI refers to dots per inch, a measure of image resolution.
  • because the lenses 121 of the plurality of seventh image acquisition devices 2220 have different focal lengths, the optimal object planes of the plurality of seventh image acquisition devices 2220 can simply and easily be made to lie at different positions in the handprint collection area 300. Also, as will be mentioned later, it is easier to set up the fill light.
  • the distances from the front ends of the lenses 121 of the plurality of seventh image capture devices 2220 to the handprint capture area 300 may be different.
  • the azimuthal term “front end” mentioned here and below refers to the end close to the handprint collection area 300 .
  • the handprint collection area 300 is defined by the enclosure of the top plate 171 of the housing 1860 and the cover 180 .
  • the distance from the front end of the lens 121 to the handprint collection area 300 can be understood as the distance from the handprint collection area 300 to the top plate 171 .
  • the lens 121 faces upwards to capture the handprints. In other embodiments not shown, the lens 121 can also face down or to the side to capture the handprints.
  • the position of the handprint collection area 300 relative to the lens 121 also needs to be adjusted accordingly. But generally, the distance from the front end of the lens 121 to the handprint collection area 300 can be understood as the distance from the front end of the lens 121 to the partition between the seventh image acquisition device 2220 and the handprint collection area 300.
  • because the distances from the front ends of the lenses 121 of the plurality of seventh image acquisition devices 2220 to the handprint acquisition area 300 can be different, even if parameters such as the focal length, image distance, and depth of field of these seventh image acquisition devices 2220 are substantially the same, their optimal object planes can be made to lie at different positions in the handprint collection area 300, so that the clearly imageable object-plane subspaces are stitched together. Therefore, the same seventh image capture device 2220 can be selected instead of customizing different image capture devices, thereby reducing the cost of processing and storage. Moreover, since the image acquisition devices are all of the same type, assembly errors due to using the wrong type of image acquisition device can also be avoided.
  • a plurality of image acquisition devices with different focal lengths and different distances from the front end of the lens to the handprint acquisition area can be selected, which can maximize the total clear imaging space formed by their combination.
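  • The underlying geometry can be sketched with the thin-lens relation 1/f = 1/u + 1/v: the focal length and image distance together fix where the best object plane (object distance u) falls, so differing focal lengths or lens positions shift it. The helper and the numbers below are illustrative assumptions only, using the 6 mm and 8 mm focal lengths mentioned above.

```python
def best_object_distance(focal_length_mm: float, image_distance_mm: float) -> float:
    """Thin-lens relation 1/f = 1/u + 1/v, solved for the object distance u."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / image_distance_mm)

# Illustrative values only: a 6 mm lens focused with a 6.6 mm image distance
# has its best object plane about 66 mm away, while an 8 mm lens with a
# 9.0 mm image distance focuses about 72 mm away, so two cameras mounted at
# the same height can have best object planes at different positions.
print(best_object_distance(6.0, 6.6))   # ~66.0
print(best_object_distance(8.0, 9.0))   # ~72.0
```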
  • the optical axes of the lenses 121 of the plurality of seventh image capture devices 2220 may be inclined toward the central line MN.
  • the optical axis can be shown as OP and ST as shown in FIG. 29 .
  • the angle between the optical axes OP and ST and the central line MN may be less than or equal to 10 degrees.
  • the lenses 121 of the plurality of seventh image collection devices 2220 are all inclined towards the centerline MN, which can ensure that the fields of view of the image collection devices approximately overlap and that the hand is located at the center of the field of view of each seventh image acquisition device 2220, thereby ensuring imaging quality.
  • the present application does not exclude the situation that the optical axes OP and ST are both parallel to the central line MN.
  • the distance from the center of the field of view on the optimal object plane of the plurality of seventh image capture devices 2220 to the center line MN is less than or equal to the first preset threshold.
  • in this way, each seventh image acquisition device 2220 can photograph the hand while ensuring that the hand is positioned at the center of its field of view.
  • the first preset threshold is 0-10 mm; preferably, the first preset threshold may be 0-5 mm; further preferably, the first preset threshold may be 0-2 mm.
  • the handprint collection system 200 may further include a first supplementary light 2230 .
  • the first supplementary light 2230 may be a supplementary light provided in an integral or split structure, which is not limited here.
  • the first fill light 2230 may be located around the plurality of seventh image capture devices 2220 .
  • the light-emitting surface of the first supplementary light 2230 may face the handprint collection area 300 .
  • the first supplementary light 2230 can be generally in various shapes such as a circle, a square, a rhombus, or an ellipse, which is not specifically limited here.
  • the first supplementary light 2230 can be controlled in any suitable manner such as automatic or manual, which will not be described here.
  • the first supplementary light 2230 can provide auxiliary light for the surrounding environment under the condition of lack of light.
  • the first supplementary light 2230 can also have the ability to adjust brightness, color temperature and so on.
  • the first supplementary light 2230 is mainly for supplementary light at the bottom of the palm lines such as four fingers or two thumbs. In this way, when the light is insufficient, the first supplementary light 2230 can provide auxiliary light for the handprint collection area 300 and the surroundings of the plurality of seventh image acquisition devices 2220, so as to facilitate the acquisition of the lenses 121 of the plurality of seventh image acquisition devices 2220.
  • a better shooting environment is provided, which is more conducive to collecting clear handprint images in any environment, and reduces the noise of the collected images caused by insufficient light; thereby improving the collection effect of the handprint collection system 200;
  • the applicable scenarios of the handprint collection system 200 are expanded, and the versatility of the handprint collection system 200 is improved.
  • the handprint collection system 200 may further include a light-transmitting plate 140 .
  • the light-transmitting plate 140 can be generally in various shapes such as square, circular, or elliptical, which is not limited here.
  • the plurality of seventh image capture devices 2220 and the handprint capture area 300 may be respectively located on both sides of the light-transmitting plate 140 . It can be known from the foregoing that the plurality of seventh image capture devices 2220 includes the lens 121 . Lens 121 usually requires careful maintenance.
  • the plurality of seventh image acquisition devices 2220 and the handprint collection area 300 can be respectively located on both sides of the light-transmitting plate 140, and the plurality of seventh image acquisition devices 2220 can pass through the light-transmitting plate 140 to collect and collect handprints in the handprint collection area 300. shoot. Moreover, the light-transmitting plate 140 can also divide a plurality of seventh image acquisition devices 2220 and the handprint collection area 300 on its two sides, which is equivalent to forming two separate spaces.
  • the centers of the front ends of the lenses 121 of the plurality of seventh image acquisition devices 2220 lie in a lens plane that is perpendicular to the centerline MN.
  • since the optical axes of the lenses 121 of the plurality of seventh image acquisition devices 2220 can be inclined toward the centerline MN, the lenses 121 are also inclined relative to one another and their faces may not be coplanar; for convenience of description, the plane that passes through the centers of the front ends of the lenses 121 of the seventh image capture devices 2220 and is perpendicular to the centerline MN is defined as the lens plane.
  • the distance from the light emitting surface of the first fill light 2230 to the lens plane is less than or equal to the second preset threshold.
  • when the distance from the light-emitting surface of the first supplementary light 2230 to the lens plane is greater than the second preset threshold, the first supplementary light 2230 cannot provide sufficient fill light for the lenses 121 of the plurality of seventh image capture devices 2220 when ambient light is insufficient, and the collection effect will be significantly reduced.
  • the light emitting surface of the first fill light 2230 may be coplanar with the lens plane. In this way, the first supplementary light 2230 can further provide better supplementary light effects for the lenses 121 of the plurality of seventh image capture devices 2220 and reduce lamp shadows.
  • the first fill light 2230 may be in a ring shape surrounding a plurality of seventh image capture devices 2220 .
  • the first supplementary light 2230 can provide supplementary light in the general circumferential direction of the plurality of seventh image capture devices 2220.
  • the user's hand will be placed in the central area of the handprint collection area 300 for shooting
  • the first supplementary light 2230 can be arranged in a ring shape surrounding the plurality of seventh image acquisition devices 2220 and can be arranged around the hand, thereby providing uniform supplementary light from all directions, avoiding the contrast between light and dark in the handprint image, and further improving image quality.
  • the first supplementary light 2230 may include a first C-shaped light board 131 and a second C-shaped light board 132. The openings of the first C-shaped lamp panel 131 and the second C-shaped lamp panel 132 face each other and together enclose a ring. It can be understood that the first C-shaped lamp panel 131 and the second C-shaped lamp panel 132 may both be semicircular, or one may be a major arc and the other a minor arc, which is not limited here.
  • the ring may have an inner diameter of about 80-90 mm and an outer diameter of about 100-110 mm.
  • by enclosing two C-shaped lamp panels to form the ring-shaped first fill light 2230, the first fill light 2230 can be maintained relatively easily: if one of the C-shaped lamp panels fails, only that C-shaped lamp panel needs to be replaced, and there is no need to replace the entire first supplementary light 2230, which reduces maintenance costs.
  • the shape of the C-shaped light board is smoother, and there are almost no abrupt edges and corners, which is equivalent to protecting multiple seventh image acquisition devices 2220 .
  • the ring shape may have a notch 133 .
  • the notch 133 may be a gap that cuts through the ring or a recess connected to the ring, which is not limited here.
  • the handprint collection system 200 may further include a structured light projector 100 .
  • the structured light projector 100 may be any of various structured light projectors known in the prior art or appearing in the future, which is not limited here.
  • the projecting end of the structured light projector 100 can be disposed in the notch 133. In this way, the light emitted from the projecting end of the structured light projector 100 can pass through the notch 133.
  • the handprint collection system 200 may further include a base 194 .
  • the structured light projector 100 and the plurality of seventh image acquisition devices 2220 may both be supported on the base 194. In this way, the base 194 can support and fix the structured light projector 100 and the plurality of seventh image acquisition devices 2220.
  • the structured light projector 100 can cooperate with multiple seventh image acquisition devices 2220 to form three-dimensional topography data of fingers, making the handprint information collected by the handprint collection system 200 more three-dimensional and vivid.
  • the structured light projector 100 can cooperate with multiple seventh image acquisition devices 2220 to perform real-time posture detection of the finger, and guide the user to adjust the posture of the finger.
  • the distance between the first fill light 2230 and the plurality of seventh image capture devices 2220 is less than or equal to a third preset threshold.
  • the third preset threshold may be 0-10mm; preferably, the third preset threshold may be 0-5mm; further preferably, the third preset threshold may be 0-2mm.
  • if the distance between the first supplementary light 2230 and the plurality of seventh image capture devices 2220 is large, the light emitted by the first supplementary light 2230 may not be able to illuminate the user's hand vertically, the light may be deformed after being refracted by the transparent plate 270 between the seventh image acquisition devices 2220 and the handprint collection area 300, or the imaging may be affected by light reflected from the transparent plate 270.
  • the handprint collection system 200 may further include a second supplementary light 2260 .
  • the second supplementary light 2260 can generally be in various shapes such as strip, circle, square, rhombus, or ellipse, which are not specifically limited here.
  • the projection of the second fill light 2260 in the reference plane perpendicular to the central line MN may be located in the edge area and/or peripheral area of the projection of the handprint collection area 300 in the reference plane.
  • in this way, the second supplementary light 2260 can be prevented from entering the fields of view of the plurality of seventh image capture devices 2220 and from blocking the light of the first supplementary light 2230.
  • the second fill light 2260 is mainly for the side fill light of, for example, four fingers or two thumbs.
  • the second fill light 2260 and the plurality of seventh image capture devices 2220 are located on the same side of the handprint capture area 300 . For example, it is located below the handprint collection area 300 .
  • the second supplementary light 2260 may be located between the handprint collection area 300 and the plurality of seventh image collection devices 2220 .
  • the second fill light 2260 can be inclined towards the center of the handprint collection area 300 .
  • the angled second fill light 2260 can illuminate the side of the hand vertically, thus maximizing the brightness of the part being photographed.
  • the handprint collection system 200 may further include a light-transmitting plate 270 .
  • the light-transmitting plate 270 can be generally in various shapes such as square, circular, or oval, which is not limited here.
  • the plurality of seventh image capture devices 2220 and the handprint capture area 300 may be respectively located on both sides of the light-transmitting plate 270 .
  • the lenses 121 of the plurality of seventh image capture devices 2220 generally require careful maintenance.
  • the plurality of seventh image acquisition devices 2220 and the handprint acquisition area 300 can be respectively located on both sides of the light-transmitting plate 270, and the plurality of seventh image acquisition devices 2220 can collect images of objects in the hand-print acquisition area 300 through the light-transmitting plate 270 .
• The light-transmitting plate 270 can also separate the plurality of seventh image acquisition devices 2220 and the handprint collection area 300 onto its two sides, which is equivalent to forming two separate spaces.
  • the second supplementary light 2260 may be located between the handprint collection area 300 and the light-transmitting plate 270 .
• In other words, along the direction from the handprint collection area 300 to the plurality of seventh image acquisition devices 2220, the sequence is: the handprint collection area 300, the second supplementary light 2260, the light-transmitting plate 270, and the plurality of seventh image acquisition devices 2220.
  • the second supplementary light 2260 can provide supplementary light to the collection environment of the plurality of seventh image collection devices 2220 above the light-transmitting plate 270 .
• The second supplementary light 2260 and the first supplementary light 2230 can respectively supplement light for the collection environment of the plurality of seventh image collection devices 2220 at the high and low positions.
  • the handprint collection system 200 can obtain a handprint image with better contrast.
  • the effect of the second supplementary light 2260 is more significant.
  • the second fill light 2260 is closer to the handprint collection area 300 than the light-transmitting plate 270 , which can avoid forming a shadow of the second fill light 2260 on the light-transmitting plate 270 and affecting the image collection effect.
  • the second supplementary light 2260 may include a first lamp board 2261 , a second lamp board 2262 and a third lamp board 2263 .
  • the projection of the first light board 2261 may be located on the fingertip side of the handprint collection area 300 .
  • the projection of the second light board 2262 may be located on the left side of the handprint collection area 300 .
  • the projection of the third lamp board 2263 may be located on the right side of the handprint collection area 300 .
  • the first lamp panel 2261, the second lamp panel 2262 and the third lamp panel 2263 may form a "door"-shaped structure.
• The “door”-shaped structure generally encloses the projection of the handprint collection area 300 on the reference plane.
• The first light board 2261, the second light board 2262 and the third light board 2263 may be connected as one piece or set independently of each other, which is not limited here. It can be understood from the foregoing that the opening of the “door”-shaped structure may be roughly consistent with the direction in which the fingers enter and exit the handprint collection area 300. In this way, the first light board 2261 can supplement the light on the fingertip side of the finger placed in the handprint collection area 300.
  • the second light board 2262 can supplement the light on the left side of the finger placed in the handprint collection area 300 .
  • the third light board 2263 can supplement the light on the right side of the finger placed in the handprint collection area 300 .
  • the combined effect of the three can make the second supplementary light 2260 provide supplementary light to the side of the finger, which improves the collection effect of the handprint collection system 200 .
• As shown in FIG. 28, there is a first included angle α between the first light board 2261 and the handprint collection area 300.
• There is a second included angle β between the second light board 2262 and the handprint collection area 300.
• There is a third included angle γ between the third light board 2263 and the handprint collection area 300.
• The first included angle α is smaller than the second included angle β.
• The second included angle β is equal to the third included angle γ.
• The first included angle α may be 15-20°.
• The second included angle β may be 30-40°.
• The included angles (obtuse angles) between the finger surfaces on the left and right sides of the finger and the finger surface of the finger pulp are substantially the same, so the second included angle β may be equal to the third included angle γ.
• The angle (obtuse angle) between the finger surface at the fingertip and the finger surface of the finger pulp is relatively large compared with the angles (obtuse angles) between the finger surfaces on the left and right sides and the finger surface of the finger pulp, so the first included angle α is smaller than the second included angle β and the third included angle γ; in this way, the first light board 2261, the second light board 2262 and the third light board 2263 can all illuminate the corresponding side of the finger approximately vertically, thereby increasing the brightness of the area being photographed.
  • the second supplementary light 2260 can generally provide good supplementary light to the sides of the fingers in the clear imaging space, which improves the supplementary light effect; and further improves the overall collection effect of the handprint collection system 200 .
  • the handprint collection system 200 may further include an eighth image collection device (not shown in the figure).
  • the eighth image capture device may be located on the side of the plurality of seventh image capture devices 2220 .
  • the optical axis of the lens 121 of the eighth image capture device can be inclined towards the centerline MN, and is used to capture images in the handprint capture area 300 .
  • the eighth image acquisition device can better acquire the handprint information on a plane different from the plane captured by the seventh image acquisition device 2220, so as to obtain a full single-fingerprint image of each finger.
• A full single-finger print includes the fingerprints on the front and on both sides of the finger pad of a single finger, and the full single-finger print image can be processed to obtain a fingerprint that simulates a rolled print.
• A plurality of seventh image capture devices 2220 may be used to capture at least one of four-finger prints, double-thumb prints, flat-palm palmprints, and side-palm palmprints. When the seventh image capture devices 2220 are used in conjunction with the eighth image capture device, full single-finger prints can be photographed.
  • the processing device may stitch the images collected by the seventh image collection device 2220 and the eighth image collection device onto the 3D fingerprint surface of the 3D fingerprint model to obtain a 3D fingerprint image.
  • the area of the fingerprint obtained is much larger than that obtained by traditional contact, and the acquisition quality is higher.
• The eighth image capture device may also be located on the fingertip side of the plurality of seventh image capture devices 2220, and be used to capture the fingerprints at the fingertips.
• The eighth image capture device can also be located on the finger-root side of the plurality of seventh image capture devices 2220, and be used to capture fingerprints at the root of the finger (that is, the part above the first knuckle, closest to the first knuckle line). Alternatively, any combination of the above-mentioned eighth image acquisition devices may be included simultaneously. The number of eighth image acquisition devices is not limited here.
• Since the handprint collection system includes multiple seventh image collection devices 2220 and eighth image collection devices, it can meet collection requirements such as single finger, double thumbs, four fingers, flat palm and side palm, making the application of the handprint collection system wider.
• The eighth image acquisition devices may be arranged in groups, each group corresponding to one seventh image acquisition device 2220. That is to say, the number of groups of eighth image capture devices is the same as the number of seventh image capture devices 2220, with a one-to-one correspondence.
  • Each group of eighth image acquisition devices may be used to capture images in the clearly imageable subspaces of the corresponding seventh image acquisition devices.
• The eighth image capture device group and the seventh image capture device corresponding to each other have matching clearly imageable subspaces.
• Each group may include two eighth image capture devices, which are respectively located on the left and right sides of the plurality of seventh image capture devices 2220, and are used to cooperate with the corresponding seventh image capture device 2220 to capture full single-finger images.
  • each group may also include one eighth image acquisition device or more eighth image acquisition devices.
• A group of eighth image capture devices corresponding to a seventh image capture device 2220 can simultaneously capture images of the user's hand, thereby ensuring that the seventh image acquisition device 2220 and the corresponding group of eighth image acquisition devices can image clearly, and thus ensuring the photographing effect of both the front and side handprints.
• In some embodiments, the eighth image acquisition devices form a single group.
  • the handprint collection system 200 may also include an angle adjustment device (not shown in the figure).
  • the angle adjustment device may be connected to the eighth image capture device for making the optical axis of the lens 121 of the eighth image capture device have at least N tilt angles with respect to the central line MN.
• When located at the n-th tilt angle, the eighth image acquisition device is used to capture the clearly imageable subspace of the n-th seventh image acquisition device, where n belongs to {1, 2, 3, ..., N-1, N}.
• For example, there are three seventh image acquisition devices, that is, N is 3, and there is one group of eighth image acquisition devices; the angle adjustment device can give the optical axis of the lens of the eighth image acquisition device at least three tilt angles with respect to the centerline MN. When located at the first tilt angle, the eighth image acquisition device can be used to shoot the clearly imageable subspace of the first seventh image acquisition device; when located at the second tilt angle, it can be used to shoot the clearly imageable subspace of the second seventh image acquisition device; and when located at the third tilt angle, it can be used to shoot the clearly imageable subspace of the third seventh image acquisition device.
• In this way, clear handprint images can be captured in cooperation with all the seventh image acquisition devices 2220 while fewer eighth image acquisition devices are configured, which reduces the number of parts and the cost, reduces the occupied space, optimizes the structure of the handprint collection system 200, and facilitates its miniaturized design.
  • the handprint collection system of the embodiment of the present application can be used for simultaneous collection of four-finger fingerprints, simultaneous collection of double-thumb fingerprints, or palmprint collection.
• When collecting multi-finger prints or palmprints, different parts of the multiple fingers or the palm are more likely to have tilt angles (including pitch angles, roll angles, and yaw angles), resulting in some areas of the captured image being unclear because they are not within the clearly imageable range of the image capture device.
  • the user is required to place the hand at a specific height, and there are many restrictions on the posture of the hand, so that the handprint can be clearly imaged.
  • an embodiment of the present application provides a non-contact method for collecting handprints of a target object.
  • the device for realizing the non-contact target object's handprint collection method may be the above-mentioned handprint collection system 200 .
  • the device implementing the non-contact target object handprint collection method may be a handprint collection system 200 as shown in FIGS. 22-29 .
• The handprint acquisition system may include a preview image acquisition device and one or more preset image acquisition devices (for example, multiple image acquisition devices whose clearly imageable object surface subspaces partly overlap), wherein the preview image acquisition device may be one of the one or more preset image acquisition devices, or may be a separate image acquisition device independent of the one or more preset image acquisition devices.
  • the preview image capture device and/or the preset image capture device may be any type of image capture device, such as an industrial camera, a camera module, and the like.
• Fig. 30 shows a schematic flowchart of a method 3000 for non-contact collection of handprints of a target object according to an embodiment of the present application. The method 3000 may include an acquisition step S3010, a posture judging step S3020, an acquiring step S3030, and a processing step S3040.
• Acquisition step S3010: use the preview image acquisition device to acquire a preview handprint image of the handprint collection area, where the handprint collection area is used to place the hand and the preview handprint image includes the handprint of the target object.
  • the target object includes at least one of the four fingers of the hand, the two thumbs, and the palm.
  • the aforementioned three-dimensional target may be or include a target object.
  • One or more image capture devices in the handprint capture system 200 may include preview image capture devices, and target images captured by the one or more image capture devices may include preview handprint images.
• When collecting the handprint image of the target object, the target object can be placed into the handprint collection area of the handprint collection system 200 at one time.
• For example, if the target object is four fingers, the four fingers can be placed into the handprint collection area at the same time, instead of a single finger being placed in four separate times.
• In this way, the fingerprint images of the four fingers can be obtained at one time, which improves the collection efficiency and reduces the number of times the user needs to cooperate.
• The preset image capture devices may be the seventh image capture devices 2220, and the preview image capture device may be one of the preset image capture devices 2220 or a separate image capture device (such a separate image capture device is not shown in FIG. 25).
  • the preview image of the handprint of the target object can be collected by a preview image collection device.
  • the preview image collection device can obtain the user's preview handprint image at the current collection time (for example, time t).
• Posture judging step S3020: determine whether the posture of the posture judgment object is qualified based on the preview handprint image; if qualified, proceed to the acquiring step S3030. The posture includes at least one of the following posture indicators: position, yaw angle, roll angle, and pitch angle. The posture judgment object is the target object or a processing object.
  • the gesture judgment object may be a target object or a processing object.
  • a processing object can represent a single finger or a palm contained in a target object.
  • the posture judgment object may be the target object (four fingers) or the processing object (single finger).
• If the posture judgment object is the four fingers, a conclusion of qualified or unqualified posture is drawn for the four fingers as a whole.
• If the posture judgment object is a single finger, a conclusion of qualified or unqualified posture is given for that single finger.
  • the posture judgment object may be the target object (double thumbs) or the processing object (single thumb).
  • the gesture judgment object can only be the palm.
  • the pose determination object may be a target object. Judging whether the posture of the target object is qualified or not may include judging whether one or more of the position, yaw angle, roll angle, and pitch angle of the target object in the previewed handprint image are qualified or not. In this case, the posture of the target object is considered qualified only if the overall posture of the target object (for example, all four fingers) is qualified, and the subsequent acquisition step S3030 can be performed. If the posture of any one or more processing objects in the target object is unqualified, it may be considered that the posture of the target object is unqualified, and the subsequent acquiring step S3030 is not performed.
• The above position may indicate whether the position of the target object in the preview handprint image is to the left or to the right, whether it is in the central area of the preview handprint image, and so on.
• The acquiring step S3030 is to acquire the to-be-processed handprint image corresponding to the processing object included in the target object;
  • the to-be-processed handprint image includes the processing object with a qualified posture, and the processing object is a single finger or palm in the target object;
  • the to-be-processed handprint image It is determined according to the target handprint image corresponding to the processing object;
  • the target handprint image is collected by the target image acquisition device for the handprint collection area within the specified time interval from the preview handprint image collection time;
• the target handprint image includes the handprint of the target object;
  • the clearly imageable object surface subspace of the target image acquisition device matches the height of the object to be processed.
  • One or more image acquisition devices in the handprint collection system 200 may include target image capture devices, and target images captured by the one or more image capture devices may include target handprint images.
  • the to-be-processed handprint image corresponding to the processing object is acquired. If the postures of all processing objects corresponding to the target object are qualified, the to-be-processed handprint images corresponding to all the processing objects can be obtained; if the postures of some processing objects corresponding to the target object are qualified, the to-be-processed handprint images corresponding to this part of the processing objects can be obtained.
  • the handprint image to be processed is determined according to the target handprint image.
  • the target handprint image includes the handprint of the target object, the to-be-processed handprint image only includes a single processing object, and the to-be-processed handprint image is determined according to the target handprint image.
  • the target handprint image includes four fingers, and the to-be-processed handprint image is an image that includes only the middle finger and is intercepted from the target handprint image including four fingers.
• The handprint image to be processed may be an image of the entire single finger, an image of the fingerprint region of the single finger, or an image of the fingerprint of the single finger, which is not limited here.
  • the target handprint image includes a structured light channel and an unstructured light channel
  • the handprint image to be processed can be determined by performing target segmentation on the unstructured light channel of the target handprint image.
• The step of determining the handprint image to be processed according to the target handprint image may be performed during the execution of the acquiring step S3030, during the execution of the posture judging step S3020, or after the posture judging step S3020 and before the acquiring step S3030.
• The to-be-processed handprint image of a processing object is determined according to the target handprint image corresponding to that processing object. It can be understood that different processing objects belonging to the same target object may correspond to the same target handprint image or to different target handprint images, and may correspond to the same target image acquisition device or to different target image acquisition devices. If, at a certain moment, it is determined according to the heights of a first processing object and a second processing object belonging to the same target object that the target image acquisition devices corresponding to the two are the same, then two to-be-processed handprint images respectively containing the two processing objects can be acquired from the same target handprint image collected by that target image acquisition device.
  • the first processing object and the second processing object correspond to different target image acquisition devices.
  • the target handprint image of the processing object is collected by the target image acquisition device, and the clearly imageable object surface subspace of the target image acquisition device matches the height of the processing object.
  • the height of the processed object is used to determine the target image capture device.
  • the height of the processing object may be determined according to a sensor such as a distance sensor, or may be determined according to a preview image of the handprint.
  • the image acquisition device of the handprint acquisition system has various setting schemes.
  • the image acquisition device of the handprint acquisition system only includes a preview image acquisition device, and the preview image acquisition device is not only used for preview, but also for acquisition of target handprint images.
  • the preview image acquisition device has a large enough clear imageable object plane space, which is sufficient for clear imaging of four fingers, double thumbs or palms.
  • the target image acquisition device is the preview image acquisition device, and the clearly imageable object surface subspace of the preview image acquisition device matches the height of the processing object.
• The target handprint image can be the preview handprint image in which the posture of the processing object is judged to be qualified, or it can be an image re-captured after the posture of the processing object is determined to be qualified (this is usually done when the preview image is captured at a lower resolution, or when a re-capture is considered to allow better focusing).
  • the preset image acquisition device is used to capture target handprint images, that is to say, the target image capture device is selected from the preset image capture devices.
• The clearly imageable object surface subspaces of the multiple preset image acquisition devices partially overlap, and which preset image acquisition device serves as the target image acquisition device is determined by the matching between its clearly imageable object surface subspace and the height of the processing object.
• For example, suppose there are two preset image acquisition devices A and B whose clearly imageable object surface subspaces are [h1, h2] and [h2, h3] respectively (single-point coincidence is taken here only as an example; the two clearly imageable object surface subspaces may of course overlap over an interval).
• When the height of the processing object falls between h1 and h2, A is used as the target image acquisition device; when the height of the processing object falls between h2 and h3, B is used as the target image acquisition device.
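• As a non-limiting illustration of this height-matching rule, the following Python sketch selects a target image acquisition device from a list of devices with known clearly imageable object surface subspaces; the device names and numeric bounds are assumed for the example only and are not part of the disclosure.

```python
# Illustrative sketch only: selecting the target image acquisition device whose
# clearly imageable object surface subspace contains the height of the processing
# object. Device names and subspace bounds are hypothetical examples.

from typing import List, Optional, Tuple

Device = Tuple[str, float, float]  # (name, lower bound, upper bound) in mm

def select_target_device(devices: List[Device], height: float) -> Optional[str]:
    """Return the first device whose clearly imageable subspace contains `height`."""
    for name, h_low, h_high in devices:
        if h_low <= height <= h_high:
            return name
    return None  # no device matches; the hand may need to be repositioned

if __name__ == "__main__":
    # Two preset devices A and B with subspaces [h1, h2] and [h2, h3], as in the text.
    h1, h2, h3 = 20.0, 35.0, 50.0
    preset_devices = [("A", h1, h2), ("B", h2, h3)]
    print(select_target_device(preset_devices, 28.0))  # -> A
    print(select_target_device(preset_devices, 42.0))  # -> B
```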
• In this case, the target handprint image can be captured almost simultaneously with the preview handprint image (that is, while the preview image acquisition device collects images, the preset image acquisition devices are also collecting images; the target image acquisition device is selected from the preset image acquisition devices, and the image collected by the target image acquisition device becomes the target handprint image), or, after the posture of the processing object is determined to be qualified and the height of the processing object is determined according to the preview handprint image, the image can be re-captured by the target image acquisition device selected according to the height.
• In another setting scheme, the preview image acquisition device is provided in addition to the preset image acquisition devices.
• The preview image acquisition device collects the preview handprint image for judging the hand posture.
• Both the preview image acquisition device and the preset image acquisition devices can be used for acquisition of the target handprint image; which image acquisition device is used as the target image acquisition device is determined according to the matching between the clearly imageable object surface subspace of the image acquisition device and the height of the processing object.
• For example, there are a preview image acquisition device C and preset image acquisition devices A and B, whose clearly imageable object surface subspaces are h0-h1, h1-h2, and h2-h3 respectively.
  • the target image capture device may be a preview image capture device, or a preset image capture device.
• The target handprint image can be the preview handprint image (in this case, after the height of the processing object is determined according to the preview handprint image, the preview image acquisition device is selected as the target image acquisition device); or it can be captured almost simultaneously with the preview handprint image (in this case, while the preview image acquisition device collects images, the preset image acquisition devices are also collecting images; after the height of the processing object is determined according to the preview handprint image, the target image acquisition device selected according to the height is a preset image acquisition device rather than the preview image acquisition device, and the image collected by the target image acquisition device becomes the target handprint image); or it can be an image re-captured by the target image acquisition device after the posture of the processing object is determined to be qualified, the height of the processing object is determined according to the preview handprint image, and the target image acquisition device is selected according to the height (in this case the target image acquisition device may be the preview image acquisition device or a preset image acquisition device).
• The target image acquisition device corresponding to the processing object can meet the following requirement: the clearly imageable object surface subspace of the target image acquisition device corresponding to the processing object matches the height of the processing object. In this way, a clearly imaged processing object can be obtained in the target handprint image.
  • the height of the target object is determined as a whole, and the height of the target object is used as the height of each processing object included in the target object. At this time, all processing objects can correspond to the same target image acquisition device.
• This solution is suitable for determining the heights of the processing objects when the posture judgment object is the whole hand.
  • the height can be determined individually for each processing object. In the case that two processing objects have different heights, the two processing objects may correspond to different target image acquisition devices, that is, correspond to different target handprint images.
  • the target handprint image is collected by the target image collection device for the handprint collection area within a specified time interval from the preview handprint image collection time.
  • the designated time interval may be set to any appropriate duration as required.
• The specified time interval may be [0, Δt], where the magnitude of Δt may be in the range of [1, 100] ms, for example.
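• As a non-limiting illustration, a timestamp check of this kind could be sketched as follows in Python; the timestamps and the particular Δt value are assumed examples within the stated [1, 100] ms range.

```python
# Illustrative sketch only: checking that a target handprint image was collected
# within the specified time interval [0, dt] of the preview collection time.
# The timestamps and dt below are assumed examples.

def within_specified_interval(t_preview_ms: float, t_target_ms: float, dt_ms: float = 50.0) -> bool:
    """True if the target image was taken no earlier than the preview and at most dt_ms after it."""
    elapsed = t_target_ms - t_preview_ms
    return 0.0 <= elapsed <= dt_ms

if __name__ == "__main__":
    print(within_specified_interval(1000.0, 1030.0))  # True: collected 30 ms later
    print(within_specified_interval(1000.0, 1200.0))  # False: collected 200 ms later
```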
• The collection time of the target handprint image should not be too far from the collection time of the preview handprint image, so as to avoid the situation in which the posture of the hand changes significantly, such that although the posture is judged to be qualified according to the preview handprint image, the posture at the moment of actual collection is unqualified.
  • the target handprint image is collected within a specified time interval from the preview handprint image collection time, which can include the situation that the target handprint image and the preview handprint image are collected almost at the same time, and can also include the target handprint image collected after the preview handprint image is collected Condition.
• The case in which the target handprint image is the preview handprint image itself can also be regarded as collecting the target handprint image within a specified time interval from the collection time of the preview handprint image.
• Processing step S3040: process the acquired handprint image to be processed.
  • processing the acquired handprint image to be processed may include one or more of the following: performing fingerprint recognition on the handprint image to be processed; transforming the handprint image to be processed into a simulated stamped fingerprint image.
  • the transformation of the to-be-processed handprint image into a simulated stamped handprint image can be realized by performing three-dimensional reconstruction and planar expansion of the to-be-processed handprint image, which will be described below.
• In the above solution, the preview handprint image collected by the preview image acquisition device is used to judge the posture of the posture judgment object; after the posture is judged to be qualified, the to-be-processed handprint image is obtained according to the target handprint image captured by the target image acquisition device that matches the processing object.
• This solution ensures fast posture judgment through the preview image acquisition device, and determines the target image acquisition device matching the height of each processing object, so that the handprint acquisition device of the embodiment of the present application can clearly image handprints over a larger space and under more relaxed restrictions on hand posture, thereby reducing the requirements for user cooperation and improving user experience.
• In some embodiments, the target image acquisition device is selected from a plurality of image acquisition devices whose clearly imageable object surface subspaces partly overlap, and the method may further include: determining the height of the processing object based on the preview handprint image; and selecting, from the plurality of image acquisition devices, an image acquisition device whose clearly imageable object surface subspace matches the height as the target image acquisition device corresponding to the processing object. The acquiring step S3030 may include: using the target image acquisition device to collect a target handprint image of the handprint collection area within the specified time interval from the collection time of the preview handprint image, and determining the handprint image to be processed according to the target handprint image; or acquiring a target handprint image collected by the target image acquisition device at the same time as the preview handprint image, and determining the handprint image to be processed according to that target handprint image; or acquiring the handprint image to be processed, wherein the handprint image to be processed is determined according to the preview handprint image and the preview handprint image is the target handprint image.
  • the setting scheme of the image acquisition device of the handprint collection system includes scheme 2 and scheme 3 in step 3030 .
  • the handprint acquisition system may include a plurality of image acquisition devices that can clearly image partly overlapped subspaces of the object surface (that is, the above-mentioned preset image acquisition devices), and the target image acquisition device of the processing object may be selected from among them according to the height of the processing object. out.
  • the subspaces of the two image acquisition devices that can be clearly imaged are partially overlapped.
  • the height of the processing object can be determined based on the previewed handprint image (the height of each processing object can be calculated separately, or the height of the target object can be calculated as the height of the processing objects included in the target object).
• The preview handprint image may be collected under the illumination of a structured light source, and the structured light image corresponding to the preview handprint image may be determined according to the preview handprint image.
  • the structured light source is a monochromatic light source such as red light
• In this case, the red channel in the preview handprint image is the structured light channel, which can be used as the structured light image corresponding to the preview handprint image.
• Based on the structured light image (for example, the structured light channel of the preview handprint image), three-dimensional reconstruction can be performed on the processing object in the preview handprint image. Based on the 3D reconstruction result, the height of the processing object can be obtained (when the line connecting the target object and the image acquisition device is basically in the vertical direction).
  • the heights of the processing objects included in the target object may be the same or different.
  • the heights of the processing objects included in the target object may be the same.
  • the heights of each processing object included in the target object may be calculated and the average value of some or all of the heights may be used as the final height of each processing object.
• For example, the respective heights of the four fingers can be calculated, the largest and smallest of these heights can be selected, and the average of the two can be calculated as the height of the target object, which is also used as the height of each processing object.
  • the processing objects included in the target object have the same height.
  • the height of the target object may be calculated and used as the height of the processing object.
  • the processing objects included in the target object have different heights.
  • the height of each processing object may be calculated separately as the height of each processing object.
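• As a non-limiting illustration of the two height-assignment schemes just described (a shared height taken as the average of the largest and smallest finger heights, or an individual height per finger), a short Python sketch follows; the finger names and numeric heights are assumed.

```python
# Illustrative sketch only: two ways of assigning heights to the processing objects
# (single fingers) of a target object. The input heights are assumed to come from
# 3D reconstruction of the structured light channel; names and values are examples.

from typing import Dict

def shared_height(finger_heights: Dict[str, float]) -> Dict[str, float]:
    """Whole-hand scheme: the average of the largest and smallest finger heights
    is used as the common height of every processing object."""
    h = (max(finger_heights.values()) + min(finger_heights.values())) / 2.0
    return {finger: h for finger in finger_heights}

def individual_heights(finger_heights: Dict[str, float]) -> Dict[str, float]:
    """Per-finger scheme: each processing object keeps its own height."""
    return dict(finger_heights)

if __name__ == "__main__":
    measured = {"index": 30.0, "middle": 34.0, "ring": 33.0, "little": 26.0}  # mm, assumed
    print(shared_height(measured))       # every finger -> 30.0
    print(individual_heights(measured))  # each finger keeps its own value
```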
  • the granularity of the posture judgment is the entire target object (that is, the posture judgment object is the target object, and the posture judgment result is whether the posture of the target object is qualified or unqualified)
  • the heights of the processing objects included in the target object can be calculated individually or as a whole, and the heights of the processing objects can be the same or different.
• When the processing objects share the same height, the target collection devices corresponding to the processing objects must be the same; when the processing objects have their own respective heights, the target collection devices corresponding to the processing objects may be different.
  • the granularity of posture judgment is a single processing object (that is, the posture judgment object is a single processing object, and the posture judgment result is that the posture of the processing object is qualified or unqualified)
• When the height is calculated, the height of each processing object is usually calculated separately, because at this time it cannot be guaranteed that the postures of all processing objects are qualified, and only the height of a processing object with a qualified posture can be calculated.
  • the target acquisition device corresponding to the different processing objects included in the target object can be different.
• Take the preset image acquisition devices including the image acquisition device C1 and the image acquisition device C2 as an example.
  • the posture determination object is a target object including four fingers, and the height calculation is performed on the four fingers as a whole.
• Based on the preview handprint image collected by the preview image collection device (C1, C2, or another image collection device), the overall height of the four fingers is determined according to the corresponding structured light image in the preview handprint image (for example, the heights of the index finger, middle finger, ring finger, and little finger are calculated respectively and the average is taken as the height of each processing object (single finger)).
  • the target collection device C1 is selected according to the height of the processing object. Since each processing object corresponds to the same height, the target collection device corresponding to each processing object is C1.
• The target handprint image is collected by the target image collection device C1, and finger segmentation is performed on the target handprint image to obtain the to-be-processed handprint image corresponding to each processing object (single finger).
• The to-be-processed handprint images corresponding to the processing objects are shown in Table 1.
  • the gesture determination object is a target object including four fingers, and the height calculation is performed on each finger in the target object.
• Based on the preview handprint image collected by the preview image collection device (C1, C2, or another image collection device), the heights of the four fingers are respectively determined according to the corresponding structured light images in the preview handprint image (for example, the heights of the index finger, middle finger, ring finger, and little finger are calculated respectively).
• Then the corresponding target acquisition devices are selected according to the heights (for example, the target acquisition device corresponding to the index finger and little finger is C1, and the target acquisition device corresponding to the middle finger and ring finger is C2).
• A handprint image collected by the target image acquisition device C1 is used as the target handprint image of the index finger and the little finger, and a handprint image collected by the target image acquisition device C2 is used as the target handprint image of the middle finger and the ring finger; finger segmentation is performed on the target handprint image of each processing object to obtain the to-be-processed handprint image corresponding to that processing object (single finger).
  • the processing objects and their corresponding target handprint images are shown in Table 2.
  • the posture judgment object may be each of the four fingers, and the height calculation is performed on each finger.
• While the preview image acquisition device (for example, C1) collects preview handprint images, another image acquisition device C2 is also acquiring images.
• Whether the posture of each processing object is qualified is judged from the preview images (for example, in the preview image collected at time T1, the index finger is judged to be qualified and the other fingers are not; in the preview image collected at time T2, the middle finger and ring finger are judged to be qualified and the little finger is not; in the preview image collected at time T3, the little finger is judged to be qualified).
• The heights of the four fingers are respectively determined according to the corresponding structured light images in the preview handprint images in which the processing object is qualified (for example, the height of the index finger is calculated from the preview handprint image collected at time T1, the heights of the middle finger and ring finger are calculated from the preview handprint image collected at time T2, and the height of the little finger is calculated from the preview handprint image collected at time T3).
• The target acquisition devices corresponding to the processing objects are then selected according to their heights (for example, the target acquisition device corresponding to the index finger, ring finger and little finger is C1, and the target acquisition device corresponding to the middle finger is C2).
• The preview handprint image captured by C1 at time T1 is used as the target handprint image of the index finger.
• The handprint image captured by C2 at time T2 is used as the target handprint image of the middle finger.
• The preview handprint image captured by C1 at time T2 is used as the target handprint image of the ring finger.
• The preview handprint image captured by C1 at time T3 is used as the target handprint image of the little finger.
• The processing objects and their corresponding target handprint images are shown in Table 3.
  • the obtaining step S3030 may include the following steps.
• In a first manner, step S3030 includes using the target image acquisition device to perform image collection on the target object in the handprint collection area within the specified time interval from the collection time of the current preview handprint image, so as to obtain a target handprint image. Subsequently, the handprint image to be processed is determined according to the collected target handprint image.
• For example, the preview image acquisition device is C1, the image acquisition devices with partly overlapping clearly imageable subspaces are C1 and C2, and the target image acquisition device selected according to the height of the processing object is C2; this is equivalent to using a new image acquisition device, different from the preview image acquisition device, to re-acquire the target handprint image.
• For another example, the preview image acquisition device is C1, the image acquisition devices with partly overlapping clearly imageable subspaces are C1 and C2, and the target image acquisition device selected according to the height of the processing object is C1; although the preview image acquisition device and the target image acquisition device are the same, the resolution of the preview handprint image collected during preview may be insufficient, so the target handprint image can be re-acquired.
• For still another example, the preview image acquisition device is C1, the image acquisition devices with partly overlapping clearly imageable subspaces are C2 and C3, and the target image acquisition device selected according to the height of the processing object is C3; a new image acquisition device, different from the preview image acquisition device, re-acquires the target handprint image.
• In a second manner, step S3030 includes acquiring a target handprint image of the handprint collection area collected by the target image collection device at the same time as the preview handprint image, and determining the handprint image to be processed according to the target handprint image. That is to say, while the preview image acquisition device acquires the preview handprint image of the handprint collection area, other image acquisition devices simultaneously acquire the target handprint image of the handprint collection area.
• For example, the preview image acquisition device is C1, and the image acquisition devices with partly overlapping clearly imageable subspaces are C1 and C2.
• While C1 collects the preview handprint image, C2 also collects a handprint image.
• If, according to the preview handprint image, the target image acquisition device corresponding to the processing object is determined to be C2, the handprint image collected by C2 while C1 was collecting the preview handprint image can be used as the target handprint image.
• In this case, although the target handprint image has been collected previously, the handprint image to be processed has not yet been determined (for example, by segmenting the target handprint image), so in the acquiring step S3030 the handprint image to be processed is determined according to the target handprint image.
• In a third manner, step S3030 includes directly acquiring the handprint image to be processed, wherein the handprint image to be processed is determined according to the preview handprint image.
• For example, the posture of the processing object is judged to be qualified according to the preview handprint image, the target image acquisition device determined according to the height of the processing object is the preview image acquisition device, and the resolution of the preview handprint image is sufficient, so there is no need to re-acquire a new image as the target handprint image; that is, the target handprint image is the preview handprint image.
• Since the preview handprint image has already been segmented for posture judgment, the single-finger image segmented during posture judgment can be directly obtained as the to-be-processed handprint image of the processing object.
• The handprint images to be processed can be obtained in different ways to meet the needs of different application scenarios. Therefore, this solution has high flexibility and can balance image quality, time delay, and the requirements for user cooperation.
  • the preview image acquisition device is an image acquisition device with the shortest focal length in the image acquisition device set composed of the preview image acquisition device and a plurality of image acquisition devices.
  • the image acquisition device with the largest field of view/depth of field is selected as the preview image acquisition device.
  • the set of image capture devices is a plurality of image capture devices.
  • the set of image acquisition devices may include image acquisition devices C1 and C2.
  • the set of image capture devices includes the preview image capture device and the plurality of image capture devices.
  • the preview image capture device is image capture device C3, and the target image capture devices are image capture devices C1 and C2, then the set of image capture devices may include image capture devices C1, C2 and C3.
  • the lenses of the image capture devices in the image capture device set may be set at the same height, but their corresponding focal lengths are different.
  • the focal lengths of the two image capture devices C1 and C2 may be 6 millimeters (mm) and 8 mm, respectively.
  • the preview image acquisition device may be an image acquisition device C1 with a focal length of 6 mm.
  • the target handprint image is collected under the illumination of a structured light source and an unstructured light source
  • the handprint image to be processed includes a structured light channel and an unstructured light channel
• The processing step may include: for each processing object, determining the three-dimensional information of the processing object according to the structured light channel of the to-be-processed handprint image corresponding to the processing object, and performing an expansion transformation on the unstructured light channel of the to-be-processed handprint image corresponding to the processing object according to the three-dimensional information, to obtain the expanded image corresponding to the processing object; and obtaining the simulated imprint image corresponding to the processing object according to the expanded image.
  • the target handprint image can be collected simultaneously under the illumination of a structured light source and an unstructured light source (such as a visible light source), and at this time, the target handprint image corresponds to the structured light image and the unstructured light image.
• The target handprint image can have a structured light channel and an unstructured light channel (which can be respectively used as the structured light image and the unstructured light image corresponding to the target handprint image), and the corresponding handprint image to be processed can also include a structured light channel and an unstructured light channel.
  • three-dimensional reconstruction can be performed on the processing object to determine the three-dimensional information of the processing object.
  • the three-dimensional information may be the three-dimensional world coordinates of each point on the processing object in the world coordinate system.
• The conversion relationship between the world coordinate system and the image coordinate system can be determined based on each stripe in the structured light channel of the to-be-processed handprint image, and based on this conversion relationship the two-dimensional image coordinates of each point on the processing object are converted into three-dimensional world coordinates, so as to obtain the three-dimensional information of the processing object.
  • the unstructured light channel of the to-be-processed handprint image corresponding to the processing object can be expanded and transformed to obtain the expanded image corresponding to the processed object. Further processing (such as image enhancement) on the expanded image can obtain the corresponding simulated imprinted image.
• In other words, the three-dimensional information of the processing object is obtained through the structured light channel of the handprint image to be processed corresponding to the processing object, the unstructured light channel of that handprint image is expanded and transformed based on the three-dimensional information, and the corresponding simulated imprint image is obtained according to the expanded image.
• The simulated imprint image obtained in this way by non-contact collection is closer to a handprint collected by actual imprinting, and higher comparison accuracy can be obtained when the simulated imprint image is compared with an imprinted fingerprint image.
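• The following simplified Python/numpy sketch illustrates one possible form of such an expansion transformation, assuming the three-dimensional information has already been reduced to a per-pixel height map of the finger surface; it resamples each image row by arc length so that the curved surface is flattened. It is only an illustrative approximation of the idea, not the specific algorithm of this application.

```python
# Simplified, illustrative sketch of an expansion (unrolling) transform: each row of
# the unstructured-light (grayscale) channel is resampled uniformly in arc length
# along the finger surface, using a height map assumed to come from the structured
# light channel. Not the patented algorithm itself, only a minimal demonstration.

import numpy as np

def unroll_row(intensity: np.ndarray, height: np.ndarray, out_width: int) -> np.ndarray:
    """Resample one image row uniformly in arc length along the finger surface."""
    dx = 1.0                                    # pixel pitch in x (assumed units)
    dz = np.diff(height)
    ds = np.sqrt(dx ** 2 + dz ** 2)             # arc-length element between columns
    s = np.concatenate([[0.0], np.cumsum(ds)])  # cumulative arc length per column
    s_uniform = np.linspace(0.0, s[-1], out_width)
    return np.interp(s_uniform, s, intensity)   # intensity sampled evenly along the surface

def unroll_image(gray: np.ndarray, height_map: np.ndarray, out_width: int) -> np.ndarray:
    """Apply the row-wise unrolling to the whole unstructured-light channel."""
    return np.stack([unroll_row(gray[r], height_map[r], out_width)
                     for r in range(gray.shape[0])])

if __name__ == "__main__":
    h, w = 8, 64
    x = np.linspace(-1.0, 1.0, w)
    height_map = np.tile(10.0 * np.sqrt(np.clip(1.0 - x ** 2, 0.0, None)), (h, 1))  # cylindrical finger
    gray = np.tile(np.sin(8 * np.pi * x) * 0.5 + 0.5, (h, 1))                        # fake ridge pattern
    flattened = unroll_image(gray, height_map, out_width=96)
    print(flattened.shape)  # (8, 96)
```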
• In some embodiments, determining whether the posture of the posture judgment object is qualified based on the preview handprint image may include: determining a first posture indicator of the processing object contained in the posture judgment object based on the preview handprint image, where the first posture indicator is selected from the posture indicators.
  • the first posture index may represent an index that needs to be judged separately for each processing object (that is, judgment is performed at a granularity of a single processing object), and it is selected from posture indexes.
  • the posture judgment object is a processing object (such as a single finger), and all posture indicators of each processing object can be judged separately.
• When the posture judgment object is a target object (such as four fingers as a whole), the pitch angle of each finger can be judged separately, while the roll angle can be judged for the four fingers as a whole; in this case, the first posture indicator includes the pitch angle and does not include the roll angle.
  • Whether the posture of the posture judging object is qualified can be determined by judging whether the first posture index of the processing object included in the posture judging object is qualified.
• When the posture judgment object is the target object, some posture indicators can be calculated for the target object as a whole and judged as qualified or unqualified as a whole, while other posture indicators can be calculated for each single processing object, and the posture of the target object is determined to be qualified only when those indicators of all the processing objects are qualified.
• Alternatively, the posture indicator of a single processing object may be used as a representative of the overall posture indicator of the posture judgment object.
• Determining the first posture indicator of the processing object included in the posture judgment object based on the preview handprint image may include at least one of the following:
• determining the pitch angle or roll angle of the processing object according to the first structured light repeating unit and the second structured light repeating unit corresponding to the processing object in the structured light channel of the preview handprint image, where the preview handprint image is collected under the illumination of a structured light source and includes a structured light channel;
• determining the roll angle or pitch angle of the processing object according to the heights of two stripe points of the third structured light repeating unit corresponding to the processing object in the structured light channel of the preview handprint image, where the preview handprint image is collected under the illumination of a structured light source and includes a structured light channel;
• determining the position and/or yaw angle of the processing object according to the position and/or shape of the processing object in the unstructured light channel of the preview handprint image, where the preview handprint image is collected under the illumination of an unstructured light source.
  • the preview handprint image may be collected under the illumination of a structured light source, so the preview handprint image may include a structured light channel.
  • the structured light source can be strip light
• Each of the structured light repeating units described herein (for example, the first to the sixth structured light repeating units) may represent any stripe in the strip-shaped structured light.
• If the first structured light repeating unit and the second structured light repeating unit are arranged along a direction perpendicular to the axis of any processing object (such as a single finger), they can be used to determine the pitch angle of the processing object.
• For example, the height difference between the first structured light repeating unit and the second structured light repeating unit corresponding to the processing object may be determined, and this height difference is used to represent the pitch angle of the processing object.
• If the height difference is smaller than the first height threshold, it may be determined that the pitch angle of the processing object is qualified; otherwise, it is determined that the pitch angle of the processing object is unqualified.
  • the first height threshold can be set to any suitable value as required.
• FIG. 31A shows a schematic diagram of a structured light channel image according to an embodiment of the present application. It should be noted that the structured light channel image shown in FIG. 31A is generated by irradiating structured light onto the processing object (a single finger), and both stripe-shaped and scattered-point repeating units are formed.
• The structured light repeating unit described herein mainly refers to each stripe in the strip-shaped structured light; the scattered points formed within the stripes, such as those in FIG. 31A, can be ignored. FIG. 31A shows the first structured light repeating unit and the second structured light repeating unit. Assuming the height difference between them is H1, if H1 is smaller than the first height threshold, the pitch angle of the processing object can be determined to be qualified; otherwise, the pitch angle of the processing object is determined to be unqualified. If the first structured light repeating unit and the second structured light repeating unit extend along a direction parallel to the axis of any processing object (for example, a single finger), they can be used to determine the roll angle of the processing object.
  • the height difference between the first structured light repeating unit and the second structured light repeating unit corresponding to the processing object may be determined, and the height difference is used to represent the roll angle of the processing object, as shown in FIG. 31B .
• FIG. 31B shows a schematic diagram of the arrangement of structured light repeating units in structured light according to another embodiment of the present application. It can be understood that FIG. 31B only shows the first structured light repeating unit and the second structured light repeating unit arranged in a direction parallel to the axis of the object to be processed, and the finger is not shown; however, the finger corresponding to FIG. 31B can be understood to be placed in the same position as the finger shown in FIG. 31A, so that the structured light repeating units shown in FIG. 31B extend in a direction parallel to the axis of the finger.
• In this arrangement, the height difference between the first structured light repeating unit and the second structured light repeating unit can be calculated; if the height difference is less than the second height threshold, it can be determined that the roll angle of the processing object is qualified, otherwise it can be determined that the roll angle of the processing object is unqualified.
  • the second height threshold can be set to any suitable value as required.
  • the height of the strip can be expressed by the height of a certain point on the strip or the average height of several points, for example, by the height of the midpoint of the strip.
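• As a non-limiting illustration, the pitch/roll qualification check based on the height difference between two structured light repeating units could be sketched as follows; the stripe heights and the threshold value are assumed examples.

```python
# Illustrative sketch only: judging pitch / roll qualification from the height
# difference between two structured light repeating units (stripes), each stripe
# being represented here by the height of its midpoint. Threshold values are assumed.

def angle_qualified(height_unit_1: float, height_unit_2: float, height_threshold: float) -> bool:
    """Qualified when the height difference between the two stripes is below the threshold.

    If the stripes are arranged perpendicular to the finger axis, the difference
    represents the pitch angle; if they extend parallel to the axis, it represents
    the roll angle.
    """
    return abs(height_unit_1 - height_unit_2) < height_threshold

if __name__ == "__main__":
    FIRST_HEIGHT_THRESHOLD = 4.0   # mm, assumed value
    print(angle_qualified(31.5, 33.0, FIRST_HEIGHT_THRESHOLD))  # True  -> angle qualified
    print(angle_qualified(28.0, 36.0, FIRST_HEIGHT_THRESHOLD))  # False -> angle unqualified
```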
  • If the third structured light repeating unit extends along a direction perpendicular to the axis of any processing object (such as a single finger), the roll angle of the processing object can be determined according to the heights of two stripe points of the third structured light repeating unit. If the third structured light repeating unit extends along a direction parallel to the axis of the processing object, the pitch angle of the processing object can be determined according to the heights of the two stripe points. Exemplarily, when the third structured light repeating unit is a stripe, two stripe points that are relatively far apart can be selected on it.
  • FIG. 32A shows a schematic diagram of a structured light channel image according to another embodiment of the present application. As shown in FIG. 32A, the height difference between the first stripe point a and the second stripe point b on the third structured light repeating unit is calculated; if the height difference is less than the third height threshold, it can be determined that the roll angle of the finger is qualified, otherwise it is determined that the roll angle of the processing object is unqualified.
  • The third height threshold can be set to any suitable value as required, for example, 4 mm.
  • Similarly, the height difference between two stripe points on the third structured light repeating unit can be calculated and used to represent the pitch angle of the processing object, as shown in FIG. 32B.
  • FIG. 32B shows a schematic diagram of the arrangement of structured light repeating units according to another embodiment of the present application. It can be understood that FIG. 32B only shows the third structured light repeating unit arranged in a direction parallel to the axis of the object to be processed, and the finger itself is not shown. The finger corresponding to FIG. 32B can be understood as being placed in the same position as the finger shown in FIG. 32A, so that the structured light repeating unit shown in FIG. 32B extends in a direction parallel to the axis of the finger.
  • If this height difference is less than the fourth height threshold, the pitch angle of the processing object is determined to be qualified, otherwise it is determined to be unqualified. The fourth height threshold can be set to any suitable value as required, for example, 4 mm.
  • In addition to the pitch angle and roll angle, the position and/or yaw angle of the processing object can also be determined.
  • To this end, the unstructured light channel image may be segmented to determine a mask or envelope of each processing object.
  • The image segmentation of the unstructured light channel image can be implemented using any suitable existing or future image segmentation network model.
  • The image segmentation network model may include one or more of the following: fully convolutional networks (FCN), U-Net, the DeepLab series, V-Net, and other network models.
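  • Purely for illustration (the actual segmentation uses a network model as listed above), a minimal stand-in that produces one mask per processing object might look like this; the thresholding approach, the threshold value and the function name are assumptions.

```python
import numpy as np
from scipy import ndimage

def object_masks(unstructured_img: np.ndarray, thresh: float = 0.2):
    """Stand-in for the segmentation network (FCN / U-Net / DeepLab / V-Net in the text):
    a simple intensity threshold followed by connected-component labelling, returning
    one binary mask per processing object found in the unstructured light channel image."""
    binary = unstructured_img > thresh * unstructured_img.max()
    labels, n = ndimage.label(binary)
    return [labels == i for i in range(1, n + 1)]
```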
  • For example, the target object in the current unstructured light channel image may include two processing objects, such as the user's two thumbs.
  • Through segmentation, the masks or envelopes of the two thumbs in the unstructured light channel image can be obtained.
  • From these, the positions and/or shapes of the two thumbs in the unstructured light channel image can be determined, and then it can be judged whether the position and/or yaw angle of each processing object is qualified.
  • The position of the mask and/or envelope of any processing object can be used to represent the position of that processing object, and the shape of the mask and/or envelope of any processing object (such as the left thumb) can be used to represent the shape of that processing object.
  • Fig. 33 shows a schematic diagram of processing objects in an unstructured light channel image according to an embodiment of the present application. In an example, it may be determined whether the position of any processing object (that is, the position of its mask and/or envelope) is in the central area of the current unstructured light channel image. The size of the central area can be set as required, and it at least includes the central point of the unstructured light channel image.
  • If any object to be processed (for example, the left thumb) is in the central area of the unstructured light channel image, its position can be determined to be qualified; otherwise, its position can be determined to be unqualified.
  • In FIG. 33, point r is the center point of the unstructured light channel image. According to the position of the envelope of the processing object, it can be judged that the processing object is not in the central area of the current unstructured light channel image, so its position is unqualified.
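  • A minimal sketch of the central-area position check described above is given here; defining the central area as a fixed fraction of the image size and using the mask centroid to represent the object's position are assumptions for illustration.

```python
import numpy as np

def position_qualified(mask: np.ndarray, center_fraction: float = 0.5) -> bool:
    """Qualified if the mask's centroid falls inside a central area covering
    `center_fraction` of the image height and width around the center point r."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return False  # nothing segmented for this processing object
    cy, cx = ys.mean(), xs.mean()
    half_h, half_w = h * center_fraction / 2, w * center_fraction / 2
    return abs(cy - h / 2) <= half_h and abs(cx - w / 2) <= half_w
```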
  • Different position determination criteria may be set for different fingers.
  • When the target object is four fingers, for example, it may be judged whether each processing object lies within a region bounded by a first height position, a second height position, a first width position and a second width position (see the sketch below); the first height position, the second height position, the first width position and the second width position can all be set as required.
  • For example, the first height position may be 0.1H from the upper edge of the unstructured light channel image, the second height position may be 0.3H from the lower edge of the unstructured light channel image, the first width position may be 0.1W from the left edge of the unstructured light channel image, and the second width position may be 0.1W from the right edge of the unstructured light channel image, where H is the entire height of the unstructured light channel image and W is its entire width.
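  • The four-finger region check could be sketched as follows, reading the bounds as the region the mask must stay inside; the margin values repeat the examples above and the interpretation of the bounds is an assumption.

```python
import numpy as np

def four_finger_region_qualified(mask: np.ndarray) -> bool:
    """Check that the segmented four-finger mask stays within the region that is
    0.1H below the upper edge, 0.3H above the lower edge, and 0.1W inside the
    left and right edges of the unstructured light channel image."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return False
    return (ys.min() >= 0.1 * h and ys.max() <= 0.7 * h
            and xs.min() >= 0.1 * w and xs.max() <= 0.9 * w)
```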
  • It may also be determined whether the shape of any object to be processed (that is, the shape of its mask and/or envelope) in the current unstructured light channel image is tilted relative to the axis of the unstructured light channel image.
  • The axis of the unstructured light channel image may be a center line of the unstructured light channel image extending along a preset direction (for example, the height direction of the image).
  • The inclination angle between the shape of any processing object and the axis of the current unstructured light channel image can be determined; if the inclination angle is less than the preset angle threshold, it can be determined that the yaw angle of the processing object is qualified, otherwise it is determined that the yaw angle of the processing object is unqualified.
  • The preset angle threshold can be set to any suitable value as required.
  • Fig. 34 shows a schematic diagram of an unstructured light channel image according to an embodiment of the present application. As shown in Fig. 34, the dotted line M1N1 indicates the axis of the unstructured light channel image, and the dotted line M2N2 indicates the axis of the processing object.
  • The inclination angle between the axis of the processing object and the axis of the current unstructured light channel image is the angle θ. If the inclination angle θ is smaller than the preset angle threshold, it may be determined that the yaw angle of the processing object is qualified; otherwise, it is determined that the yaw angle of the processing object is unqualified.
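  • One way the yaw check might be realized is to estimate the object's axis from its mask and compare its tilt against the image axis; the PCA-based axis estimate, the threshold value and the function name below are assumptions, not the method prescribed by this application.

```python
import numpy as np

def yaw_angle_qualified(mask: np.ndarray, angle_threshold_deg: float = 15.0) -> bool:
    """Estimate the processing object's axis as the principal direction of its mask
    pixels, then compare the tilt relative to the image's height-direction axis
    (the line M1N1) with a preset angle threshold."""
    ys, xs = np.nonzero(mask)
    if len(ys) < 2:
        return False
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]  # unit vector along the object's axis
    tilt_deg = np.degrees(np.arccos(np.clip(abs(axis[1]), 0.0, 1.0)))
    return tilt_deg < angle_threshold_deg
```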
  • In this way, the pitch angle and/or roll angle of the processing object is determined through the structured light repeating units in the structured light channel, while the position and/or yaw angle of the processing object is determined through its position and/or shape in the unstructured light channel. Different judgment methods can thus be adopted for different attitude judgment requirements, which helps ensure the accuracy of the judgment results.
  • When the posture judgment object is a target object including multiple processing objects, determining whether the posture of the posture judgment object is qualified based on the previewed handprint image may include: determining the roll angle of the target object according to the heights of the fourth structured light repeating units corresponding to the processing objects in the structured light channel of the previewed handprint image; or, determining the position and/or yaw angle of the target object according to the position and/or shape of the target object in the unstructured light channel of the previewed handprint image, where the previewed handprint image is collected under the illumination of an unstructured light source and includes an unstructured light channel.
  • In this embodiment, the posture judgment object may be a target object including a plurality of processing objects.
  • Among the attitude indicators, the indicators other than the first attitude indicator may be used for an overall calculation.
  • That is, the position and roll angle of the target object are calculated as a whole, instead of calculating the attitude indicators of each processing object separately. In this way, the calculation speed can be increased.
  • For example, the posture judgment object may include four fingers, that is, the index finger, middle finger, ring finger, and little finger. According to the heights of the fourth structured light repeating unit corresponding to any two fingers (such as the index finger and the little finger) in the structured light channel of the previewed handprint image, the overall roll angle of the four fingers can be determined.
  • The fourth structured light repeating unit corresponding to the two processing objects is the same structured light repeating unit (for example, the same stripe) in the structured light.
  • The fourth structured light repeating unit may extend in a direction perpendicular to the axis of the processing object.
  • Based on the height difference between the heights of the fourth structured light repeating unit at the two processing objects, it can be judged whether the overall roll angle of the target object is qualified.
  • If the height difference is less than the fifth height threshold, the overall roll angle is determined to be qualified; the fifth height threshold can be set to any suitable value as required, for example, 5 mm.
  • The implementation of determining the position and/or yaw angle of the target object according to the position and/or shape of the target object in the unstructured light channel of the previewed handprint image is similar to that of determining the position and/or yaw angle of a processing object in the foregoing embodiments, and for the sake of brevity, it will not be repeated here.
  • In this scheme, the images under the structured light channel and the unstructured light channel can be used to determine the overall roll angle, position and/or shape of the target object. This method can judge whether the posture of the target object is qualified as a whole, which is efficient while the accuracy can still be guaranteed.
  • In some embodiments, the previewed handprint image is collected while the structured light source irradiates the handprint collection area, and the roll angle and/or pitch angle of the posture judgment object is determined according to a plurality of fifth structured light repeating units selected from the structured light units included in the structured light channel of the previewed handprint image.
  • The three-dimensional information of the processing object is determined according to a plurality of sixth structured light repeating units selected from the structured light units included in the structured light channel of the handprint image to be processed, and the density of the fifth structured light repeating units is less than the density of the sixth structured light repeating units.
  • In this embodiment, structured light is used to irradiate the handprint collection area to obtain a previewed handprint image.
  • The roll angle and/or pitch angle of the posture judgment object can then be determined, using the method described in the above embodiments, from the plurality of fifth structured light repeating units contained in the previewed handprint image.
  • For 3D reconstruction, at least part of the structured light units included in the structured light channel of the handprint image to be processed may be selected as the sixth structured light repeating units, from which the 3D information of the processing object is determined; the density of the sixth structured light repeating units is greater than the density of the fifth structured light repeating units.
  • In other words, the structured light repeating units used for posture determination are distributed more sparsely, while those used for 3D reconstruction are distributed more densely (see the sketch below).
  • For example, when the structured light repeating unit is a stripe, all the stripes may be used for 3D reconstruction, while only a sampled subset of the stripes is used to determine the roll angle and/or pitch angle of the posture judgment object.
  • Determining the roll angle and/or pitch angle of the posture judgment object from the low-density fifth structured light repeating units gives high calculation efficiency.
  • Determining the 3D information of the processing object from the high-density sixth structured light repeating units ensures the accuracy of the determined 3D information.
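  • The sparse/dense split might be expressed as simply as sampling every k-th stripe for the pose check while keeping all stripes for reconstruction; the sampling step below is an arbitrary example value.

```python
from typing import List

def stripes_for_pose(stripes: List, step: int = 4) -> List:
    """Fifth structured light repeating units: a sparse sample used only for the
    fast roll/pitch qualification check."""
    return stripes[::step]

def stripes_for_reconstruction(stripes: List) -> List:
    """Sixth structured light repeating units: the denser (here, full) set used
    to determine the 3D information of the processing object."""
    return list(stripes)
```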
  • In some embodiments, determining whether the posture of the posture judgment object is qualified based on the previewed handprint image may include: judging whether the position and/or yaw angle of the posture judgment object is qualified; if the position and/or yaw angle is qualified, then judging whether the roll angle and/or pitch angle of the posture judgment object is qualified; and if the roll angle and/or pitch angle is also qualified, determining that the posture of the posture judgment object is qualified.
  • The judgment order of the indicators can be determined based on the computing power each indicator consumes, with the indicator consuming less computing power judged first. For example, when determining whether the posture of the posture judgment object is qualified based on the previewed handprint image, it may first be judged whether the position and/or yaw angle of the posture judgment object is qualified. If the position and/or yaw angle is qualified, it is then judged whether the roll angle and/or pitch angle is qualified. If the position and/or yaw angle is unqualified, it can be determined directly that the posture of the posture judgment object is unqualified.
  • In that case, the judgment can be stopped, and the preview image acquisition device can optionally be used to re-acquire the previewed handprint image.
  • If the roll angle and/or pitch angle of the posture judgment object is qualified, it may be determined that the posture of the posture judgment object is qualified.
  • Otherwise, the preview image acquisition device can optionally be used to re-acquire the previewed handprint image.
  • Since the algorithm for judging the position and/or yaw angle of the posture judgment object is generally less expensive than the algorithm for judging the roll angle and/or pitch angle, judging sequentially in this order reduces wasted resources and improves efficiency (a sketch of this ordering is given below).
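  • A minimal sketch of this ordering is shown below; the two check functions are placeholders for the position/yaw judgment and the roll/pitch judgment described above.

```python
from typing import Callable

def posture_qualified(position_yaw_ok: Callable[[], bool],
                      roll_pitch_ok: Callable[[], bool]) -> bool:
    """Run the cheaper position/yaw check first; the more expensive roll/pitch
    check is only run when the first check passes."""
    if not position_yaw_ok():
        return False  # stop early; the previewed handprint image can be re-acquired
    return roll_pitch_ok()
```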
  • In some embodiments, the posture judgment object is a single processing object.
  • In this case, the method may further include: judging whether the number of to-be-processed handprint images corresponding to each processing object included in the target object reaches a preset number threshold; if so, proceeding to the processing step, and otherwise returning to at least the acquisition step S3030. Determining the three-dimensional information of the processing object according to the structured light channel of the to-be-processed handprint image corresponding to the processing object may include: determining the three-dimensional information of the processing object according to the structured light channel of a selected to-be-processed handprint image, where the selected to-be-processed handprint image is a to-be-processed handprint image that meets the quality requirements among the preset-number-threshold to-be-processed handprint images corresponding to the processing object.
  • The target object may include at least one processing object, and for each processing object it is necessary to acquire enough images to be processed (for example, a preset threshold number), so that images for determining the three-dimensional information of the processing object can be selected therefrom in step S3040.
  • The preset number threshold of to-be-processed handprint images may be set in advance.
  • The preset number threshold may be any integer greater than 0, such as 5, 6, 8 and so on. Exemplarily, the preset number threshold may be equal to five.
  • For example, when the target object includes two thumbs, for each object to be processed (each thumb), it may be determined whether the number of to-be-processed handprint images corresponding to that object reaches the preset number threshold of 5.
  • The to-be-processed handprint images corresponding to each processing object are images collected when the processing object is in a qualified posture.
  • Taking the left thumb as an example, if there are currently fewer than 5 corresponding to-be-processed handprint images, the acquisition step S3030 described in the previous embodiments can be executed again, or the acquisition steps S3010 to S3030 can be executed again, until the number of to-be-processed handprint images corresponding to the left thumb reaches 5 (see the sketch below).
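  • The per-object counting loop could be sketched as follows; `acquire_image` and `posture_ok` are placeholders for the acquisition and posture judgment steps, and the threshold of 5 repeats the example above.

```python
from collections import defaultdict

PRESET_COUNT = 5  # example preset number threshold

def collect_until_enough(processing_objects, acquire_image, posture_ok):
    """Keep acquiring to-be-processed handprint images per processing object
    (e.g. 'left_thumb', 'right_thumb') until each object has PRESET_COUNT images
    captured while its posture is qualified."""
    images = defaultdict(list)
    while any(len(images[obj]) < PRESET_COUNT for obj in processing_objects):
        for obj in processing_objects:
            if len(images[obj]) >= PRESET_COUNT:
                continue
            img = acquire_image(obj)
            if posture_ok(obj, img):
                images[obj].append(img)
    return images
```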
  • If a posture judgment determines that the posture of the processing object is qualified, and a plurality of target handprint images corresponding to the processing object are collected within a short period of time after the judgment to determine the to-be-processed handprint images, only the acquisition step S3030 needs to be re-executed.
  • The posture can also be re-judged, that is, the entire process of the collection step S3010, the posture judgment step S3020 and the acquisition step S3030 can be re-executed to obtain new to-be-processed handprint images.
  • This continues until the number of to-be-processed handprint images corresponding to each thumb reaches 5. Once the numbers of to-be-processed handprint images corresponding to the left thumb and the right thumb have both reached 5, the processing step S3040 can be started.
  • In this scheme, acquisition for a processing object is complete as soon as its number of to-be-processed handprint images reaches the preset number threshold.
  • Each processing object therefore reaches the preset number threshold of to-be-processed images as soon as possible, so the efficiency is relatively high.
  • The to-be-processed handprint images that meet the quality requirements are then selected from the preset-threshold number of to-be-processed handprint images corresponding to the processing object, so as to determine the three-dimensional information of the processing object from them.
  • This ensures that the image quality of the to-be-processed handprint images used for determining the three-dimensional information meets the quality requirements, thereby improving the accuracy of the determined three-dimensional information.
  • In other embodiments, the posture judgment object is the target object.
  • In this case, the method may further include: judging whether the number of target handprint images corresponding to the target object reaches a preset number threshold; if so, proceeding to the processing step, and otherwise returning to at least the acquisition step. Determining the three-dimensional information of the processing object according to the structured light channel of the to-be-processed handprint image corresponding to the processing object may include: determining the three-dimensional information of the processing object according to the structured light channel of a selected to-be-processed handprint image corresponding to the processing object, where the selected to-be-processed handprint image is a to-be-processed handprint image that meets the quality requirements among the preset-number-threshold to-be-processed handprint images corresponding to the processing object.
  • In this embodiment, the posture judgment object is the entire target object.
  • The target handprint images of each processing object included in the target object reach the preset number threshold at the same time, because the postures of the processing objects are always qualified at the same time. It can be understood that, even though the postures of the processing objects are qualified at the same time, the target image acquisition devices corresponding to the processing objects may be different. For example, assuming that the preset number threshold is 3 and the index finger, middle finger, ring finger and little finger have currently obtained target handprint images collected by cameras C1, C1, C2 and C2 respectively, it may still be necessary to continue collecting target handprint images for each processing object until the threshold is reached.
  • When continuing to collect, the acquisition step S3030 described in the previous embodiments can be executed again, or the acquisition steps S3010 to S3030 can be executed again, until the number of target handprint images corresponding to the target object reaches 3.
  • If a posture judgment determines that the posture of the target object is qualified, and multiple target handprint images corresponding to the target object have been collected within a short period of time after the judgment, only the acquisition step S3030 needs to be performed.
  • The posture can also be re-judged, that is, the entire process of the collection step S3010, the posture judgment step S3020 and the acquisition step S3030 can be re-executed to obtain new target handprint images.
  • In this scheme, the number of target handprint images must reach the preset number threshold for the target object as a whole. If the posture of one processing object is unqualified while the postures of the other processing objects are qualified, the overall posture of the target object is unqualified, and no target handprint image of any processing object is obtained at that time. Since the acquired images are still required to reach the preset number threshold, the posture requirements on the target object are relatively high, and the quality of the obtained target handprint images is correspondingly high, which is more helpful for subsequent operations such as handprint recognition.
  • The to-be-processed handprint images that meet the quality requirements are then selected from the preset-threshold number of to-be-processed handprint images corresponding to the processing object, so as to determine the three-dimensional information of the processing object from them.
  • This ensures that the image quality of the to-be-processed handprint images used for determining the three-dimensional information meets the quality requirements, thereby improving the accuracy of the determined three-dimensional information.
  • FIG. 35 shows a schematic block diagram of a non-contact handprint collection device 3500 according to an embodiment of the present application.
  • The device 3500 includes a processor 3510 and computer program instructions which, when run by the processor 3510, are used to execute the above-mentioned image stitching method 400, the non-contact target object handprint collection method 3000, or the handprint collection method realized by the handprint collection system 200 (that is, the above-mentioned scheme of acquiring images while the blue light source group emits light according to a blue light-emitting scheme and the green light source group emits light according to a green light-emitting scheme).
  • According to an embodiment of the present application, a storage medium is also provided, on which program instructions are stored; when the program instructions are executed by a computer or a processor, they are used to perform the corresponding steps of the above image stitching method 400, the non-contact target object handprint collection method 3000, or the handprint collection method realized by the handprint collection system 200 (that is, the above-mentioned scheme of collecting images while the blue light source group emits light according to a blue light-emitting scheme and the green light source group emits light according to a green light-emitting scheme).
  • The storage medium may include, for example, a memory card of a smart phone, a storage unit of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
  • According to an embodiment of the present application, a computer program product is also provided, which includes a computer program; the computer program is used to execute the above image stitching method 400, the non-contact target object handprint collection method 3000, or the handprint collection method implemented by the handprint collection system 200 (that is, the above-mentioned scheme of collecting images while the blue light source group emits light according to a blue light-emitting scheme and the green light source group emits light according to a green light-emitting scheme).
  • According to an embodiment of the present application, a computer program is also provided, which is used to execute the above-mentioned image stitching method 400, the non-contact target object handprint collection method 3000, or the handprint collection method implemented by the handprint collection system 200 (that is, the above scheme of collecting images while the blue light source group emits light according to a blue light-emitting scheme and the green light source group emits light according to a green light-emitting scheme).

Abstract

Embodiments of the present application provide a handprint acquisition system. The handprint acquisition system comprises: a lighting system, which comprises one or more blue light sources and one or more green light sources, wherein each light source is used for emitting light to a handprint acquisition area, and the handprint acquisition area is used for placing a photographed object; a structured light projector, which is used for emitting structured light to the handprint acquisition area; one or more image acquisition devices, which are used for acquiring an image of the handprint acquisition area at the same time when the lighting system and the structured light projector emit light to the handprint acquisition area, so as to obtain a handprint image of the photographed object; a processing device, which is used for controlling the lighting system and the structured light projector to emit light and controlling the one or more image acquisition devices to collect the handprint image, and is further used for processing the handprint image acquired by the one or more image acquisition devices so as to obtain a structured light image and an illumination light image respectively corresponding to the image acquisition devices; and performing fusion and structured light image-based transformation on the illumination light image so as to obtain a processed handprint image.

Description

手纹采集系统Handprint collection system 技术领域technical field
本申请涉及非接触式手纹采集的技术领域,具体地,涉及一种手纹采集系统。The present application relates to the technical field of non-contact handprint collection, and in particular, to a handprint collection system.
背景技术Background technique
手纹识别技术是众多生物特征识别技术(biometrics)中的一种。所谓生物特征识别技术,系指利用人体所固有的生理特征或行为特征来进行个人身份鉴定的技术。由于生物特征识别所具有的便捷与安全等优点,使得生物特征识别技术在身份认证识别和网络安全等领域拥有广阔的应用前景。可用的生物特征识别技术有手纹(例如指纹等)、人脸、声纹、虹膜等,手纹是其中应用最为广泛的一种。Handprint recognition technology is one of many biometrics. The so-called biometric identification technology refers to the technology that uses the inherent physiological or behavioral characteristics of the human body to carry out personal identification. Due to the advantages of convenience and security of biometric identification, biometric identification technology has broad application prospects in the fields of identity authentication and network security. Available biometric identification technologies include handprint (such as fingerprints, etc.), face, voiceprint, iris, etc., among which handprint is the most widely used one.
手纹识别技术首先需要进行手纹的采集。在现有的手纹采集技术中,对不同人群的兼顾性不够好。例如,对于过干、过湿和手纹浅等多种不同情况下的手纹的采集效果有限,难以做到兼顾多种人群的采集需求。Handprint recognition technology first needs to carry out the collection of handprint. In the existing handprint collection technology, the compatibility of different groups of people is not good enough. For example, the effect of collecting handprints under various conditions such as over-dry, over-wet and shallow handprints is limited, and it is difficult to meet the collection needs of various groups of people.
发明内容Contents of the invention
为了至少部分地解决现有技术中存在的问题,根据本申请的一个方面,提供了一种手纹采集系统,包括:照明系统,照明系统包括一个或多个蓝色光源和一个或多个绿色光源,每个光源用于朝向手纹采集区发光,手纹采集区用于放置拍摄对象,拍摄对象包括单指、双拇指、四指、平掌、侧掌中至少一者;结构光投射器,结构光投射器用于向手纹采集区发射结构光;一个或多个图像采集装置,一个或多个图像采集装置用于在照明系统和结构光投射器向手纹采集区发光的同时采集手纹采集区的图像以获得拍摄对象的手纹图像;处理装置,处理装置用于控制照明系统和结构光投射器发光并控制一个或多个图像采集装置采集手纹图像;处理装置还用于对一个或多个图像采集装置采集的手纹图像进行处理,得到一个或多个图像采集装置各自对应的结构光图像和照明光图像;对照明光图像进行融合处理和基于结构光图像的变换处理,得到处理后的手纹图像;其中,照明光图像包括蓝光图像和绿光图像,用于进行融合处理的图像包括至少一个蓝光图像和至少一个绿光图像,用于进行融合处理的图像所对应的图像采集装置相同、拍摄对象相同、拍摄时间间隔在间隔范围内、拍摄的照明条件不同,照明条件不同包括照明系统的光源颜色和光照角度中至少一者不同。In order to at least partly solve the problems existing in the prior art, according to one aspect of the present application, a handprint collection system is provided, including: a lighting system, the lighting system includes one or more blue light sources and one or more green A light source, each light source is used to emit light toward the handprint collection area, and the handprint collection area is used to place the object to be photographed, and the object to be photographed includes at least one of single finger, double thumb, four fingers, flat palm, and side palm; a structured light projector, The structured light projector is used to emit structured light to the handprint collection area; one or more image acquisition devices, and one or more image acquisition devices are used to collect handprints while the lighting system and the structured light projector emit light to the handprint collection area The image of the collection area is obtained to obtain the handprint image of the subject; the processing device is used to control the lighting system and the structured light projector to emit light and control one or more image acquisition devices to collect the handprint image; the processing device is also used to process a Process the handprint images collected by one or more image acquisition devices to obtain the structured light images and illumination light images corresponding to one or more image acquisition devices; perform fusion processing on the illumination light images and transform processing based on the structured light images to obtain The processed handprint image; wherein, the illumination light image includes a blue light image and a green light image, and the image used for fusion processing includes at least one blue light image and at least one green light image, and the image corresponding to the image used for fusion processing The acquisition devices are the same, the shooting objects are the same, the shooting time interval is within the interval range, and the shooting lighting conditions are different, and the lighting conditions include at least one of the light source color and the lighting angle of the lighting system being different.
在本申请的实施例中的手纹采集系统,采集手纹图像时采用蓝色光源和绿色光源一起为图像采集装置补光。这可以扩大手纹采集系统对不同皮 肤状态的适用性,从而有利于兼顾多种人群的采集需求。In the handprint collection system in the embodiment of the present application, the blue light source and the green light source are used to supplement light for the image collection device when collecting the handprint image. This can expand the handprint acquisition system to different skin The applicability of the skin condition, which is conducive to taking into account the collection needs of various groups of people.
附图说明Description of drawings
本申请的下列附图在此作为本申请的一部分用于理解本申请。附图中示出了本申请的实施方式及其描述,用来解释本申请的原理。在附图中,The following drawings of the application are hereby considered as part of the application for understanding the application. The accompanying drawings show the embodiments of the present application and their descriptions to explain the principles of the present application. In the attached picture,
图1a示出根据本申请一个实施例的结构光投射器的示意图;Figure 1a shows a schematic diagram of a structured light projector according to an embodiment of the present application;
图1b-1d示出根据本申请一个实施例的结构光图案的示意图;Figures 1b-1d show schematic diagrams of structured light patterns according to an embodiment of the present application;
图2示出根据本申请一个实施例的目标采集系统及相关的目标物体和目标采集区的示意图;2 shows a schematic diagram of a target acquisition system and related target objects and target acquisition areas according to an embodiment of the present application;
图3示出了现有技术中将多个指纹图像直接叠加拼接在一起所获得的拼接图像的示意图;FIG. 3 shows a schematic diagram of a mosaic image obtained by directly superimposing and stitching multiple fingerprint images together in the prior art;
图4示出根据本申请一个实施例的图像拼接方法的示意性流程图;FIG. 4 shows a schematic flowchart of an image stitching method according to an embodiment of the present application;
图5示出根据本申请一个实施例的重建结构光图案的示意图;Fig. 5 shows a schematic diagram of reconstructing a structured light pattern according to an embodiment of the present application;
图6示出根据本申请一个实施例的包括拼接结构光图案在内的结构光图案的示例;FIG. 6 shows an example of a structured light pattern including a stitched structured light pattern according to an embodiment of the present application;
图7a-7c示出三个不同角度下的指纹采集所对应的三个结构光图像的示意图;7a-7c show schematic diagrams of three structured light images corresponding to fingerprint acquisition at three different angles;
图8示出根据本申请一个实施例的去除指甲之后的结构光图像的示意图;Fig. 8 shows a schematic diagram of a structured light image after nail removal according to an embodiment of the present application;
图9示出根据本申请一个实施例的通过相机采集结构光条纹的简单示意图;Fig. 9 shows a simple schematic diagram of collecting structured light stripes by a camera according to an embodiment of the present application;
图10示出根据本申请一个实施例的可视化三维点云模型示意图;Fig. 10 shows a schematic diagram of a visualized three-dimensional point cloud model according to an embodiment of the present application;
图11a-11b示出根据本申请一个实施例的模型沿公共坐标轴展开之前的去重操作的示意图;Figures 11a-11b show a schematic diagram of the deduplication operation before the model is expanded along the common coordinate axis according to an embodiment of the present application;
图12示出根据本申请一个实施例的三维模型沿公共坐标轴展开的示意图;Fig. 12 shows a schematic diagram of a three-dimensional model unfolded along a common coordinate axis according to an embodiment of the present application;
图13示出对两个图像进行拼接的示意图;Fig. 13 shows a schematic diagram of stitching two images;
图14示出根据本申请一个实施例的手纹采集系统及相关的手指和手纹采集区的示意图;Fig. 14 shows a schematic diagram of a handprint collection system and related fingers and handprint collection areas according to an embodiment of the present application;
图15示出根据本申请一个实施例的手纹采集系统的一部分的示意图;Fig. 15 shows a schematic diagram of a part of the handprint collection system according to one embodiment of the present application;
图16示出根据本申请一个实施例的手纹采集系统的部分结构及相关的手指和手纹采集区的示意图;Fig. 16 shows a schematic diagram of a partial structure of a handprint collection system and related fingers and handprint collection areas according to an embodiment of the present application;
图17示出根据本申请一个实施例的图像采集和光源发光时序的示意图;Fig. 17 shows a schematic diagram of image acquisition and light source lighting sequence according to an embodiment of the present application;
图18为根据本申请的一个示例性实施例的非接触式手纹采集装置的立体图;Fig. 18 is a perspective view of a non-contact handprint collection device according to an exemplary embodiment of the present application;
图19为根据本申请的一个示例性实施例的非接触式手纹采集装置的主视图; Fig. 19 is a front view of a non-contact handprint collection device according to an exemplary embodiment of the present application;
图20为根据本申请的一个示例性实施例的非接触式手纹采集装置的俯视图;Fig. 20 is a top view of a non-contact handprint collection device according to an exemplary embodiment of the present application;
图21为根据本申请的一个示例性实施例的非接触式手纹采集装置的侧视图;Fig. 21 is a side view of a non-contact handprint collection device according to an exemplary embodiment of the present application;
图22为根据本申请的一个示例性实施例的非接触式手纹采集设备的爆炸图;Fig. 22 is an exploded view of a non-contact handprint collection device according to an exemplary embodiment of the present application;
图23为图22中示出的非接触式手纹采集设备的轴测图;Figure 23 is an axonometric view of the non-contact handprint collection device shown in Figure 22;
图24为图23中示出的非接触式手纹采集设备的剖视图;Fig. 24 is a cross-sectional view of the non-contact handprint collection device shown in Fig. 23;
图25为图22中示出的非接触式手纹采集设备的手纹采集部分的放大图;Fig. 25 is an enlarged view of the handprint collection part of the non-contact handprint collection device shown in Fig. 22;
图26为图25所示的手纹采集部分的俯视图;Fig. 26 is a top view of the handprint collection part shown in Fig. 25;
图27为图25所示的手纹采集部分的主视图;Fig. 27 is a front view of the handprint collection part shown in Fig. 25;
图28为图25所示的手纹采集部分的侧视图;Fig. 28 is a side view of the handprint collection part shown in Fig. 25;
图29为图25所示的手纹采集部分的后视图;Fig. 29 is a rear view of the handprint collection part shown in Fig. 25;
图30示出根据本申请一个实施例的非接触式目标对象手纹的采集方法的示意性流程图;Fig. 30 shows a schematic flowchart of a non-contact target object handprint collection method according to an embodiment of the present application;
图31A示出了根据本申请一个实施例的结构光通道图像的示意图;Fig. 31A shows a schematic diagram of a structured light channel image according to an embodiment of the present application;
图31B示出根据本申请另一个实施例的结构光中的结构光重复单元的排列示意图;Fig. 31B shows a schematic diagram of the arrangement of structured light repeating units in structured light according to another embodiment of the present application;
图32A示出了根据本申请另一个实施例的结构光通道图像的示意图;Fig. 32A shows a schematic diagram of a structured light channel image according to another embodiment of the present application;
图32B示出根据本申请另一个实施例的结构光中的结构光重复单元的排列示意图;Fig. 32B shows a schematic diagram of arrangement of structured light repeating units in structured light according to another embodiment of the present application;
图33示出了根据本申请一个实施例的非结构光通道图像中处理对象的示意图;Fig. 33 shows a schematic diagram of processing objects in an unstructured light channel image according to an embodiment of the present application;
图34示出了根据本申请一个实施例的非结构光通道图像的示意图;以及Figure 34 shows a schematic diagram of an unstructured light channel image according to one embodiment of the present application; and
图35示出了根据本申请一个实施例的非接触式手纹采集设备的示意性框图。Fig. 35 shows a schematic block diagram of a non-contact handprint collection device according to an embodiment of the present application.
具体实施方式Detailed ways
本领域技术人员可以了解,如下描述仅示例性地示出了本申请的优选实施例,本申请可以无需一个或多个这样的细节而得以实施。此外,为了避免与本申请发生混淆,对于本领域公知的一些技术特征未进行详细描述。It will be appreciated by those skilled in the art that the following description is merely illustrative of preferred embodiments of the application and that the application may be practiced without one or more of these details. In addition, in order to avoid confusion with the present application, some technical features known in the art are not described in detail.
为了至少部分地解决上述现有的手纹采集技术对不同人群的兼顾性不够好的技术问题,本申请实施例提供一种手纹采集系统。该手纹采集系统包括照明系统、结构光投射器、一个或多个图像采集装置和处理装置。In order to at least partly solve the technical problem that the above-mentioned existing handprint collection technology is not good enough for different groups of people, an embodiment of the present application provides a handprint collection system. The handprint collection system includes an illumination system, a structured light projector, one or more image collection devices and a processing device.
照明系统包括一个或多个蓝色光源和一个或多个绿色光源,每个光源用于朝向手纹采集区发光,手纹采集区用于放置拍摄对象,拍摄对象包括 单指、双拇指、四指、平掌、侧掌中至少一者。The lighting system includes one or more blue light sources and one or more green light sources, each light source is used to emit light toward the handprint collection area, and the handprint collection area is used to place the object to be photographed, and the object to be photographed includes At least one of single finger, double thumb, four fingers, flat palm, and side palm.
结构光投射器用于向手纹采集区发射结构光。The structured light projector is used for emitting structured light to the handprint collection area.
一个或多个图像采集装置用于在照明系统和结构光投射器向手纹采集区发光的同时采集手纹采集区的图像以获得拍摄对象的手纹图像。One or more image acquisition devices are used to collect images of the handprint collection area while the illumination system and the structured light projector emit light to the handprint collection area to obtain a handprint image of the subject.
处理装置用于控制照明系统和结构光投射器发光并控制一个或多个图像采集装置采集手纹图像;处理装置还用于对一个或多个图像采集装置采集的手纹图像进行处理,得到一个或多个图像采集装置各自对应的结构光图像和照明光图像;对照明光图像进行融合处理和基于结构光图像的变换处理,得到处理后的手纹图像;其中,照明光图像包括蓝光图像(即下述蓝色通道图像)和绿光图像(即下述绿色通道图像),用于进行融合处理的图像包括至少一个蓝光图像和至少一个绿光图像,用于进行融合处理的图像所对应的图像采集装置相同、拍摄对象相同、拍摄(即采集)时间间隔在间隔范围内、拍摄的照明条件不同,照明条件不同包括照明系统的光源颜色和光照角度中至少一者不同。The processing device is used to control the lighting system and the structured light projector to emit light and control one or more image acquisition devices to collect handprint images; the processing device is also used to process the handprint images collected by one or more image acquisition devices to obtain a Or the respective structured light image and illumination light image corresponding to a plurality of image acquisition devices; the illumination light image is fused and processed based on the transformation process of the structured light image to obtain the processed handprint image; wherein the illumination light image includes a blue light image ( That is, the following blue channel image) and the green light image (ie, the following green channel image), the images used for fusion processing include at least one blue light image and at least one green light image, and the images used for fusion processing correspond to The image acquisition devices are the same, the shooting objects are the same, the shooting (i.e. collecting) time interval is within the interval range, and the shooting lighting conditions are different. Different lighting conditions include at least one of the color of the light source and the lighting angle of the lighting system.
蓝色光源和绿色光源可以各自为多个,并且可以以不同角度设置。如此,通过不同角度蓝色光源的开闭和/或亮度调整,即可组合出不同蓝光照明条件;同样可组合出不同绿光照明条件。There may be multiple blue light sources and green light sources, and they may be arranged at different angles. In this way, by switching on and off the blue light source at different angles and/or adjusting the brightness, different blue light lighting conditions can be combined; similarly, different green light lighting conditions can be combined.
间隔范围可以是预设时间间隔范围,其可以根据需要设定为合适大小。间隔范围可以表示为例如[t1,t2],t1可以是0秒,t2可以是0.1秒、0.2秒、0.3秒或其他接近0的时间值。用于进行融合处理的图像的拍摄时间可以是相同的或者非常接近的,即可以将间隔范围的上限设置为接近0的时间值。The interval range may be a preset time interval range, which may be set to an appropriate size as required. The interval range may be expressed as [t 1 , t 2 ], for example, t 1 may be 0 second, and t 2 may be 0.1 second, 0.2 second, 0.3 second or other time values close to 0. The shooting time of the images used for fusion processing may be the same or very close, that is, the upper limit of the interval range may be set to a time value close to 0.
处理装置可以执行以下操作:1、用于控制照明系统和结构光投射器发光并控制一个或多个图像采集装置采集手纹图像。也就是说,处理装置可用于控制拍摄。控制拍摄的方式可以包括预先设置多个不同的照明条件进行拍摄,例如在t1在第一照明条件下拍摄P1,t2在第二照明条件下拍摄P2,t3在第二照明条件下拍摄P3,后续可融合拍摄的P1、P2、P3中质量较好的部分。控制拍摄的方式也可以包括先拍摄,再根据拍摄得到的图像对照明条件进行调整后拍摄。例如,在t1在第一照明条件下拍摄P1,通过图像分析确定P1左下角较暗,则可增加P1左下角相应光照强度。控制拍摄的方式还可以包括通过预览图像采集装置确定拍摄对象的姿态是否合格,合格后再启动拍摄。例如,可以在仅使用结构光光源的情况下拍摄结构光图像,通过结构光图像判断拍摄对象的姿态,也可以在使用结构光光源和照明光光源的情况下拍摄图像,通过结构光图像和照明光图像拍摄对象的姿态。2、用于对一个或多个图像采集装置采集的手纹图像进行处理,得到一个或多个图像采集装置各自对应的结构光图像和照明光图像;例如,拍摄获得的图像为RGB三通道图像,其中红色通道的图像为结构光图像,蓝色和绿色通道的图像为照明光图像;3、基于照明光图像和结构光图像进行融合处理、变换处理等后续处理。进行后续处理时,可以是将采集到的 全部图像都用来做后续处理,也可以是基于图像质量、拍摄对象姿态、目标图像采集装置(例如高度合适的图像采集装置)筛选出一部分图像用于后续处理。The processing device can perform the following operations: 1. It is used to control the lighting system and the structured light projector to emit light and control one or more image acquisition devices to collect handprint images. That is, the processing means can be used to control the capture. The way to control shooting may include pre-setting multiple different lighting conditions for shooting, for example, shooting P1 under the first lighting condition at t1, shooting P2 under the second lighting condition at t2, shooting P3 under the second lighting condition at t3, and then The better quality parts of P1, P2, and P3 can be merged and shot. The method of controlling shooting may also include shooting first, and then shooting after adjusting the lighting conditions according to the captured image. For example, when P1 is photographed under the first lighting condition at t1, and it is determined through image analysis that the lower left corner of P1 is darker, the corresponding light intensity at the lower left corner of P1 may be increased. The method of controlling the shooting may also include determining whether the posture of the shooting object is qualified through the preview image acquisition device, and then starting the shooting after passing it. For example, the structured light image can be taken under the condition of only using the structured light source, and the attitude of the subject can be judged by the structured light image, or the image can be taken under the condition of using the structured light source and the illumination light source, and the structured light image and the illumination The light image captures the pose of the subject. 2. It is used to process the handprint images collected by one or more image acquisition devices, and obtain the corresponding structured light images and illumination light images of one or more image acquisition devices; for example, the images obtained by shooting are RGB three-channel images , wherein the image of the red channel is a structured light image, and the images of the blue and green channels are illumination light images; 3. Perform subsequent processing such as fusion processing and transformation processing based on the illumination light image and the structured light image. For subsequent processing, the collected All images are used for subsequent processing, or a part of images may be selected for subsequent processing based on image quality, subject pose, and target image acquisition device (for example, a highly suitable image acquisition device).
对照明光图像进行融合处理可以包含蓝色光源和绿色光源下的图像的融合,并且可选地还可以包含同色光源不同光照角度下的图像的融合。同色光源相同光照角度、并且不同光照强度下的图像可以不进行融合。The fusion processing of the illumination light images may include fusion of images under blue light source and green light source, and optionally may also include fusion of images under different illumination angles of the same color light source. Images under the same light source with the same light angle and different light intensities may not be fused.
例如,对于相机C1在时刻t1拍摄的蓝光图像B1,相机C1在时刻t2拍摄的蓝光图像B2(B1、B2光照角度不同,如果B1、B2只是光照强度不同,则可以不用融合),相机C1在时刻t1拍摄的绿光图像G1,相机C1在时刻t2拍摄的绿光图像G2(G1、G2光照角度不同),融合的几种情形包括但不限于:For example, for the blue-light image B1 taken by camera C1 at time t1, and the blue-light image B2 taken by camera C1 at time t2 (the illumination angles of B1 and B2 are different, if B1 and B2 are only different in light intensity, fusion is not required), camera C1 The green light image G1 taken at time t1, and the green light image G2 taken by camera C1 at time t2 (G1 and G2 have different illumination angles), several fusion scenarios include but are not limited to:
情形1、用于融合的图像可以是B1、G1,融合得到BG1。此时用于融合的图像对应的相机是相机C1。In case 1, the images used for fusion can be B1 and G1, and BG1 is obtained through fusion. At this time, the camera corresponding to the image used for fusion is camera C1.
情形2、用于融合的图像可以是B1、G1、B2、G2:可以先融合B1+G1=BG1,B2+G2=BG2,然后再融合BG1和BG2(BG1和BG2都是蓝绿融合后的图像,此时属于同色不同光照角度的融合)。在融合B1、G1、B2、G2时,优选先融合同时拍摄的B1和G1、B2和G2,而不是先融合B1和B2,这因为B1和B2拍摄时间不同,拍摄对象在t1-t2之间会有一定位移。此时用于融合的图像对应的相机是相机C1。Case 2, the images used for fusion can be B1, G1, B2, G2: you can first fuse B1+G1=BG1, B2+G2=BG2, and then fuse BG1 and BG2 (BG1 and BG2 are all blue-green fusion image, which belongs to the fusion of the same color and different lighting angles). When fusing B1, G1, B2, and G2, it is preferable to fuse B1 and G1, B2 and G2 that were shot at the same time first, instead of fusing B1 and B2 first, because B1 and B2 are shot at different times, and the shooting object is between t1-t2 There will be a certain displacement. At this time, the camera corresponding to the image used for fusion is camera C1.
情形3、用于融合的图像可以是B1经过变换得到的B1'以及G1经过重建得到的G1':B1'+G1'=BG1'。此时用于融合的图像对应的相机是相机C1。In case 3, the images used for fusion may be B1' obtained by transforming B1 and G1' obtained by reconstructing G1: B1'+G1'=BG1'. At this time, the camera corresponding to the image used for fusion is camera C1.
情形4、假设相机C2也在t1时刻拍摄了B1、G1,在t2时刻拍摄了B2、G2。此时用于融合的图像可以是:相机C1拍摄的B1变换后得到的B1'和相机C2拍摄的B1变换后得到的B1'拼接后得到的B1P;相机C1拍摄的G1变换后得到的G1'和相机C2拍摄的G1变换后得到的G1'拼接后得到的G1P。将可以将B1P和G1P融合。此时用于融合的B1P对应的相机是相机C1和相机C2,G1P对应的相机是相机C1和相机C2,B1P和G1P对应的相机相同。Situation 4. It is assumed that camera C2 also photographed B1 and G1 at time t1, and photographed B2 and G2 at time t2. At this time, the image used for fusion can be: B1' obtained after B1 transformation taken by camera C1 and B1' obtained after B1 transformation taken by camera C2 and B1P obtained after splicing; G1' obtained after G1 transformation taken by camera C1 G1P obtained after splicing with G1' obtained after transformation of G1 captured by camera C2. It will be possible to fuse B1P and G1P. At this time, the cameras corresponding to B1P used for fusion are camera C1 and camera C2, the cameras corresponding to G1P are camera C1 and camera C2, and the cameras corresponding to B1P and G1P are the same.
示例性地,蓝色光源发射的蓝光的波长小于430nm,绿色光源发射的绿光的波长大于540nm。Exemplarily, the wavelength of the blue light emitted by the blue light source is less than 430 nm, and the wavelength of the green light emitted by the green light source is greater than 540 nm.
示例性地,对照明光图像进行融合处理和基于结构光图像的变换处理,可以包括:对同一图像采集装置拍摄的至少一个照明光图像中的蓝光图像和绿光图像进行融合处理,得到融合图像;基于至少一个照明光图像所对应的结构光图像,对融合图像进行变换处理,得到处理后的手纹图像。Exemplarily, performing fusion processing on the illumination light image and transformation processing based on the structured light image may include: performing fusion processing on the blue light image and the green light image in at least one illumination light image captured by the same image acquisition device to obtain a fusion image ; Based on the structured light image corresponding to at least one illumination light image, transforming the fused image to obtain a processed handprint image.
这种方案是先将同一图像采集装置拍摄的蓝光图像和绿光图像进行融合,再对融合图像进行变换处理。变换处理可以是诸如下文描述的基于重建结构光图案进行的重建变换。 This solution is to first fuse the blue-light image and the green-light image taken by the same image acquisition device, and then transform the fused image. The transformation process may be a reconstruction transformation based on reconstructed structured light patterns such as described below.
示例性地,对照明光图像进行融合处理和基于结构光图像的变换处理,可以包括:基于同一图像采集装置拍摄的至少一个照明光图像所对应的结构光图像,对至少一个照明光图像中的蓝色图像进行变换处理,得到变换后的蓝色图像,基于同一图像采集装置拍摄的至少一个照明光图像所对应的结构光图像,对至少一个照明光图像中的绿色图像进行变换处理,得到变换后的绿色图像,对变换后的蓝色图像和变换后的绿色图像进行融合处理,得到处理后的手纹图像。Exemplarily, performing the fusion processing on the illumination light image and the transformation processing based on the structured light image may include: based on the structured light image corresponding to at least one illumination light image captured by the same image acquisition device, for the at least one illumination light image The blue image is transformed to obtain the transformed blue image, based on the structured light image corresponding to at least one illumination light image taken by the same image acquisition device, the green image in the at least one illumination light image is transformed to obtain the transformed The transformed green image is merged with the transformed blue image and the transformed green image to obtain the processed handprint image.
这种方案是先对蓝光图像和绿光图像进行变换处理之后再将变换后的图像进行融合。In this solution, the blue-ray image and the green-light image are transformed first, and then the transformed images are fused.
示例性地,至少一个照明光图像包括第一时刻拍摄的第一照明光图像和第二时刻拍摄的第二照明图像;对同一图像采集装置拍摄的至少一个照明光图像中的蓝光图像和绿光图像进行融合处理,得到融合图像,包括:对同一图像采集装置在第一时刻拍摄的第一照明光图像中的第一蓝光图像和第一绿光图像进行融合处理,得到第一蓝绿融合图像,对同一图像采集装置在第二时刻拍摄的第二照明光图像中的第二蓝光图像和第二绿光图像进行融合处理,得到第二蓝绿融合图像,对第一蓝绿融合图像和第二蓝绿融合图像进行融合处理,得到融合图像。Exemplarily, at least one illumination light image includes a first illumination light image captured at a first moment and a second illumination image captured at a second moment; the blue light image and the green light image in at least one illumination light image captured by the same image acquisition device Performing fusion processing on the images to obtain a fusion image, including: performing fusion processing on the first blue-light image and the first green-light image in the first illumination light image captured by the same image acquisition device at the first moment, to obtain the first blue-green fusion image , performing fusion processing on the second blue-light image and the second green-light image in the second illumination light image captured by the same image acquisition device at the second moment to obtain a second blue-green fusion image, for the first blue-green fusion image and the second The two blue-green fused images are fused to obtain a fused image.
这种方案是将B1、G1、B2、G2这4张图像融合后(先B1+G1、B2+G2,再BG1+BG2),再对融合后的图像进行变换处理。This solution is to fuse the four images of B1, G1, B2, and G2 (first B1+G1, B2+G2, and then BG1+BG2), and then transform the fused images.
示例性地,至少一个照明光图像包括第一时刻拍摄的第一照明光图像和第二时刻拍摄的第二照明图像;变换后的蓝色图像包括由第一照明光图像中的第一蓝色图像确定的第一变换蓝色图像和由第二照明光图像中的第二蓝色图像确定的第二变换蓝色图像,变换后的绿色图像包括由第一照明光图像中的第一绿色图像确定的第一变换绿色图像和由第二照明光图像中的第二绿色图像确定的第二变换绿色图像,对变换后的蓝色图像和变换后的绿色图像进行融合处理,得到处理后的手纹图像,包括:对第一变换蓝色图像和第一变换绿色图像进行融合处理,得到第一变换融合图像,对第二变换蓝光图像和第二变换绿光图像进行融合处理,得到第二变换融合图像,对第一变换融合图像和第二变换融合图像进行融合处理,得到处理后的手纹图像。Exemplarily, at least one illumination light image includes a first illumination light image taken at a first moment and a second illumination image taken at a second moment; the transformed blue image includes the first blue color in the first illumination light image A first transformed blue image determined from the image and a second transformed blue image determined from the second blue image in the second illuminated light image, the transformed green image comprising the first green image determined from the first illuminated light image The determined first transformed green image and the second transformed green image determined by the second green image in the second illumination light image are fused with the transformed blue image and transformed green image to obtain the processed hand The texture image includes: performing fusion processing on the first transformed blue image and the first transformed green image to obtain the first transformed fusion image, and performing fusion processing on the second transformed blue image and the second transformed green image to obtain the second transformed image The images are fused, performing fusion processing on the first transformation fusion image and the second transformation fusion image to obtain a processed handprint image.
这种方案是通过变换处理得到B1'、B2'、G1'、G2'之后,再融合4张图像(先B1'+G1'得到BG1'、B2'+G2得到BG2',再BG1'+BG2')。This scheme is to obtain B1', B2', G1', G2' through transformation processing, and then fuse 4 images (first B1'+G1' to get BG1', B2'+G2 to get BG2', and then BG1'+BG2 ').
示例性地,照明光图像包括同一图像采集装置在第一时刻拍摄的第一照明光图像和第二时刻拍摄的第二照明图像;对照明光图像进行融合处理和基于结构光图像的变换处理,得到处理后的手纹图像,包括:对第一照明光图像中的第一蓝色图像和第一照明光图像中的第一绿色图像进行融合处理,得到第一蓝绿融合图像;对第二照明光图像中的第二蓝色图像和第二照明光图像中的第二绿色图像进行融合处理,得到第二蓝绿融合图像; 基于第一照明光图像对应的第一结构光图像,对第一蓝绿融合图像进行变换处理,得到第一融合变换图像;基于第二照明光图像对应的第二结构光图像,对第二蓝绿融合图像进行变换处理,得到第二融合变换图像;对第一融合变换图像和第二融合变换图像进行融合处理,得到处理后的手纹图像。Exemplarily, the illumination light image includes a first illumination light image taken by the same image acquisition device at the first moment and a second illumination image taken at the second moment; fusion processing and transformation processing based on the structured light image are performed on the illumination light image, Obtaining the processed handprint image includes: performing fusion processing on the first blue image in the first illumination light image and the first green image in the first illumination light image to obtain the first blue-green fusion image; performing fusion processing on the second blue image in the illumination light image and the second green image in the second illumination light image to obtain a second blue-green fusion image; Based on the first structured light image corresponding to the first illumination light image, the first blue-green fusion image is transformed to obtain the first fusion transformation image; based on the second structured light image corresponding to the second illumination light image, the second blue-green fusion image is obtained. performing transformation processing on the green fusion image to obtain a second fusion transformation image; performing fusion processing on the first fusion transformation image and the second fusion transformation image to obtain a processed handprint image.
这种方案是:B1+G1=BG1,然后对BG1进行变换处理,使得BG1->(BG1)';B2+G2=BG2,然后对BG2进行变换处理,使得BG2->(BG2)';随后将(BG1)'+(BG2)'进行融合:(BG1)'+(BG2)'。This scheme is: B1+G1=BG1, then BG1 is transformed so that BG1->(BG1)'; B2+G2=BG2, then BG2 is transformed so that BG2->(BG2)'; then Fusion (BG1)'+(BG2)': (BG1)'+(BG2)'.
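As a simple illustration of the fusion order described above (first fusing the blue and green images captured at the same instant, then fusing across capture times), the following sketch shows only the pipeline shape; the fuse operator, the variable names and the use of a pixel-wise maximum as a stand-in fusion rule are assumptions, not the actual implementation of this application.

```python
import numpy as np

def fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Placeholder fusion operator (stand-in for the sharpness-based or
    neural-network fusion described elsewhere in this application)."""
    return np.maximum(img_a, img_b)

def fuse_blue_green_sequence(b1, g1, b2, g2):
    """Fuse the images captured at the same instant first (B1+G1 -> BG1,
    B2+G2 -> BG2), then fuse the two results (BG1+BG2); fusing B1 with B2
    directly is avoided because the finger may shift between t1 and t2."""
    bg1 = fuse(b1, g1)  # both captured at t1
    bg2 = fuse(b2, g2)  # both captured at t2
    return fuse(bg1, bg2)
```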
Exemplarily, the fusion processing includes computational fusion and/or neural network model fusion. The computational fusion includes: calculating a sharpness index for the pixel at a first position in each image used for the fusion processing, and taking the pixel value of the pixel at the first position in the image whose sharpness index is better as the pixel value of the pixel at the first position in the fused image. The neural network model fusion includes: inputting the pixels at a second position in the images used for the fusion processing into a neural network model to obtain the pixel value of the pixel at the second position in the fused image.
For example, the sharpness indices of the pixel at the first position in the blue image and in the green image are calculated separately; if the sharpness index of the pixel at the first position in the blue image is better than that of the pixel at the first position in the green image, the pixel value of the pixel at the first position in the fused image is the pixel value of the pixel at the first position in the blue image.
The sharpness index at the first position may be calculated, for example, as the standard deviation between the pixel value of the pixel at the first position and the pixel values of the pixels in the neighborhood around the first position, and this standard deviation is used as the sharpness index of the pixel at the first position. A larger standard deviation means that the pixel values in the neighborhood around the first position vary more strongly, i.e., the sharpness is higher. Alternatively, the standard deviation among the pixel values of the pixels within the first position may be used as the sharpness index of the pixel at the first position. Exemplarily, the sharpness index may also be expressed by the quality score described below.
The pixel at the first position may be a single pixel or a plurality of pixels within an image region (for example, a 3x3 region), so that image fusion is performed at the pixel level or at the image-region level. When the pixel at the first position is a single pixel, each pixel position in the image may be taken as the first position in turn; when the pixels at the first position are a plurality of pixels within an image region, the image may be divided into a plurality of regions and each region is taken as the first position in turn.
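As a hedged illustration of the computational fusion just described, the following Python sketch uses the neighborhood standard deviation as the sharpness index and keeps, at each pixel, the value from whichever image is sharper there. The neighborhood radius and the function names are illustrative assumptions, not part of the disclosure.
import numpy as np

def local_sharpness(img: np.ndarray, y: int, x: int, radius: int = 1) -> float:
    """Sharpness index of the pixel at (y, x): standard deviation of the pixel
    values in the surrounding neighborhood (a larger value means sharper)."""
    y0, y1 = max(0, y - radius), min(img.shape[0], y + radius + 1)
    x0, x1 = max(0, x - radius), min(img.shape[1], x + radius + 1)
    return float(np.std(img[y0:y1, x0:x1]))

def fuse_by_sharpness(blue: np.ndarray, green: np.ndarray) -> np.ndarray:
    """Pixel-level computational fusion: at every position keep the pixel value
    from whichever image has the better (higher) sharpness index there."""
    assert blue.shape == green.shape
    fused = np.empty_like(blue)
    for y in range(blue.shape[0]):
        for x in range(blue.shape[1]):
            if local_sharpness(blue, y, x) >= local_sharpness(green, y, x):
                fused[y, x] = blue[y, x]
            else:
                fused[y, x] = green[y, x]
    return fused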
Exemplarily, for the above neural network model fusion, all positions in the images used for the fusion processing may be taken as the second positions, or only the positions that need to be fused may be taken as the second positions (for example, by masking out the positions that do not need to be fused). For example, B1, G1, B2 and G2 that need to be fused are input into the neural network to obtain the fused image.
Optionally, after fusion, pixel-value optimization may be performed on the boundary regions of the fused image, i.e., the regions lying between portions that come from different input images. Through pixel-value optimization, the junctions can be made smoother and continuity can be maintained. For example, suppose each first position is a 3x3 region and the whole image is 9x9, so that the image contains nine first positions numbered 1 to 9; if the first of these positions in the fused image comes from the blue image and the second comes from the green image, then the pixels in rows 1-3, column 3 and rows 1-3, column 4 are boundary-region pixels, and pixel-value optimization may be performed on them.
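One possible, purely illustrative way to carry out this boundary pixel-value optimization is sketched below: pixels whose 3x3 neighborhood mixes values from different source images are re-estimated from that neighborhood. The per-pixel source-label array and the use of a simple neighborhood mean are assumptions for illustration only.
import numpy as np

def smooth_seams(fused: np.ndarray, source_id: np.ndarray) -> np.ndarray:
    """fused: fused image (2-D array); source_id: per-pixel label of which input
    image each value came from. Pixels whose 3x3 neighborhood contains a
    different label (seam pixels) are replaced by the neighborhood mean."""
    out = fused.astype(float)
    h, w = fused.shape
    for y in range(h):
        for x in range(w):
            ys, xs = slice(max(0, y - 1), y + 2), slice(max(0, x - 1), x + 2)
            if (source_id[ys, xs] != source_id[y, x]).any():  # boundary-region pixel
                out[y, x] = fused[ys, xs].mean()
    return out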
Either one of computational fusion and neural network model fusion may be used, or the two may be combined. Exemplarily, the fusion processing includes both computational fusion and neural network model fusion. The computational fusion includes: calculating the sharpness index of the pixel at a first position in each image used for the fusion processing, and, if the gap between the sharpness indices of the plurality of images used for the fusion processing is greater than a gap threshold, taking the pixel value of the pixel at the first position in the image with the better sharpness index as the pixel value of the pixel at the first position in the fused image. The neural network model fusion includes: inputting the pixels at a second position in the images used for the fusion processing into a neural network model to obtain the pixel value of the pixel at the second position in the fused image, where the second position is a position at which the gap between the sharpness indices of the plurality of images used for the fusion processing is not greater than the gap threshold.
Computational fusion and neural network model fusion can thus be used in combination. For example, for pixels at the same position in the blue-light image and the green-light image, if the sharpness index of the pixel in the blue-light image exceeds that of the pixel in the green-light image by more than the gap threshold, the pixel at that position in the blue-light image is significantly sharper, and its pixel value is taken as the pixel value at that position in the fused image; conversely, if the sharpness index of the pixel in the green-light image exceeds that of the pixel in the blue-light image by more than the gap threshold, the pixel at that position in the green-light image is significantly sharper, and its pixel value is taken as the pixel value at that position in the fused image. If the gap between the two is smaller than the threshold, their sharpness indices are similar, and the pixel value of the fused image at that position can be determined by neural network model fusion. Exemplarily, still taking a 9x9 fused image containing 81 pixels as an example: if, for the pixels at positions 1-40, the sharpness index of the blue image is significantly better than that of the green image, for the pixels at positions 51-81 the sharpness index of the green image is significantly better than that of the blue image, and for the pixels at positions 41-50 the sharpness indices of the two images are similar, then the blue image with all positions other than 41-50 masked out (for example, set to 0) and the green image with all positions other than 41-50 masked out (for example, set to 0) are input into the neural network model fusion to obtain pixels 41-50 of the fused image. The pixels at positions 1-40 of the fused image come from the blue image, and the pixels at positions 51-81 come from the green image. It can be understood that when computational fusion and neural network model fusion are combined, the pixel values in the boundary regions of the fused image can likewise be optimized.
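The combined scheme can be sketched as follows, assuming region-level (3x3 block) sharpness indices, a gap threshold, and a `fusion_model` callable standing in for any trained fusion network; all of these specifics are assumptions for illustration, not fixed by the disclosure.
import numpy as np

def block_sharpness(img: np.ndarray, bs: int = 3) -> np.ndarray:
    """Per-block sharpness index: standard deviation inside each bs x bs block.
    For simplicity the image dimensions are assumed to be multiples of bs."""
    h, w = img.shape[0] // bs, img.shape[1] // bs
    return img[:h * bs, :w * bs].reshape(h, bs, w, bs).std(axis=(1, 3))

def hybrid_fuse(blue, green, fusion_model, gap=5.0, bs=3):
    """Where one image is clearly sharper (gap above the threshold) it wins
    outright; blocks with similar sharpness are masked and handed to the model."""
    sb, sg = block_sharpness(blue, bs), block_sharpness(green, bs)
    fused = np.zeros(blue.shape, dtype=float)
    ambiguous = np.zeros(blue.shape, dtype=bool)
    for i in range(sb.shape[0]):
        for j in range(sb.shape[1]):
            ys, xs = slice(i * bs, (i + 1) * bs), slice(j * bs, (j + 1) * bs)
            if sb[i, j] - sg[i, j] > gap:       # blue clearly sharper here
                fused[ys, xs] = blue[ys, xs]
            elif sg[i, j] - sb[i, j] > gap:     # green clearly sharper here
                fused[ys, xs] = green[ys, xs]
            else:                               # similar sharpness: defer to the model
                ambiguous[ys, xs] = True
    if ambiguous.any():
        # Mask out the already-decided positions (set to 0) before model fusion.
        b_in = np.where(ambiguous, blue, 0)
        g_in = np.where(ambiguous, green, 0)
        fused[ambiguous] = fusion_model(b_in, g_in)[ambiguous]
    return fused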
The neural network model used for image fusion may adopt an existing model architecture and training method. For example, a fused image obtained by the computational method may be used as the ground truth, a loss value is determined from the difference between the fused image predicted by the neural network model and the ground truth, and the neural network model is updated with the loss value. As another example, the fused images predicted by the neural network model may be scored by experts, so that the neural network model can adjust its network parameters to obtain better scores.
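A minimal training sketch following the first strategy (regressing the network's prediction onto the computationally fused image) might look like the following. The `FusionNet` architecture and the use of PyTorch are assumptions for illustration only, not part of the disclosure.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    # Toy fusion model: the blue and green images are stacked as two channels
    # and a small convolutional network predicts the single-channel fused image.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, blue, green, target_fused):
    # One update: predict a fused image from (N, 1, H, W) blue/green tensors and
    # regress it onto the computationally fused image used as the ground truth.
    pred = model(torch.cat([blue, green], dim=1))
    loss = nn.functional.mse_loss(pred, target_fused)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()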
The one or more image acquisition devices described above can be arranged in at least the following three configurations:
A1: Three image acquisition devices may be provided to capture images of a single finger. For example, fingerprint images of a single finger may be captured separately by three image acquisition devices whose optical axes are at an angle to one another, and the fingerprint images captured by the three image acquisition devices may be stitched together to simulate a rolled fingerprint.
A2: Six image acquisition devices may be provided, including three image acquisition devices whose optical axes are at an angle to one another and three image acquisition devices whose optical axes are substantially parallel, and optionally a preview image acquisition device. These six cameras and the optional preview image acquisition device can be used to capture a single finger, both thumbs, four fingers, and palm prints.
In this case, when capturing a single finger, the fingerprint images captured by the three image acquisition devices whose optical axes are at an angle to one another can be stitched together to simulate a rolled fingerprint of the single finger; when capturing both thumbs or four fingers, an image acquisition device at a suitable position can be selected from the three image acquisition devices with substantially parallel optical axes to capture the fingerprint images; when capturing palm prints, the three image acquisition devices with substantially parallel optical axes can be used to capture palm-print images separately, and the captured palm-print images are stitched together to obtain the complete palm print.
A3: A plurality of (for example, two) image acquisition devices with partially overlapping depth-of-field ranges may be provided, and optionally a preview image acquisition device. These image acquisition devices with partially overlapping depth-of-field ranges and the optional preview image acquisition device can be used to capture both thumbs and four fingers. When capturing both thumbs or four fingers, the image captured by the image acquisition device whose height is most suitable can be selected from the plurality of image acquisition devices with partially overlapping depth-of-field ranges for processing.
Exemplarily, the one or more image acquisition devices include a plurality of common image acquisition devices that jointly photograph the same subject. Performing fusion processing on the illumination light images and transformation processing based on the structured light images to obtain the processed handprint image includes: performing fusion processing on the illumination light images, performing transformation processing based on the structured light images, and stitching the transformed images corresponding to the common image acquisition devices to obtain the processed handprint image; the subject includes a single finger, a flat palm or the side of the palm.
The images captured by the plurality of common image acquisition devices can be stitched together to obtain a complete handprint. Stitching can be implemented with the various image-stitching schemes described below. Exemplarily, image stitching can be performed in application scenarios where a single-finger rolled fingerprint or a palm print needs to be obtained.
The fusion, transformation processing and stitching of the images can be performed in various orders, with the constraint that stitching is always performed on transformed images. The order of the fusion, transformation and stitching steps is illustrated below.
Exemplarily, performing fusion processing on the illumination light images, performing transformation processing based on the structured light images, and stitching the transformed images corresponding to the common image acquisition devices to obtain the processed handprint image may include: for each of the plurality of common image acquisition devices, performing fusion processing on the blue image and the green image in the at least one illumination light image corresponding to that image acquisition device to obtain a fusion image corresponding to that image acquisition device; performing transformation processing on the fusion image corresponding to that image acquisition device based on the structured light image corresponding to that image acquisition device to obtain a transformed fusion image corresponding to that image acquisition device; and stitching the transformed fusion images corresponding to the plurality of common image acquisition devices to obtain the processed handprint image.
This is the scheme of blue-green fusion first, then transformation, then stitching.
Again taking as an example B1 and G1 captured by image acquisition device C1 and B1 and G1 captured by image acquisition device C2, where C1 and C2 are common image acquisition devices: first, B1 and G1 captured by C1 are fused to obtain BG1 of C1, and B1 and G1 captured by C2 are fused to obtain BG1 of C2; the two fusion images are then transformed to obtain BG1' of C1 and BG1' of C2; finally, BG1' of C1 and BG1' of C2 are stitched together to obtain the processed handprint image.
Exemplarily, performing fusion processing on the illumination light images, performing transformation processing based on the structured light images, and stitching the transformed images corresponding to the common image acquisition devices to obtain the processed handprint image may include: for each of the plurality of common image acquisition devices, performing transformation processing on the blue image in the at least one illumination light image corresponding to that image acquisition device based on the structured light image corresponding to that image acquisition device to obtain a transformed blue image corresponding to that image acquisition device, performing transformation processing on the green image in the at least one illumination light image corresponding to that image acquisition device based on the structured light image corresponding to that image acquisition device to obtain a transformed green image corresponding to that image acquisition device, and performing blue-green fusion processing on the transformed blue image and the transformed green image corresponding to that image acquisition device to obtain a transformed fusion image corresponding to that image acquisition device; and stitching the transformed fusion images corresponding to the plurality of common image acquisition devices to obtain the processed handprint image.
This is the scheme of transformation first, then blue-green fusion, then stitching.
Again taking as an example B1 and G1 captured by image acquisition device C1 and B1 and G1 captured by image acquisition device C2, where C1 and C2 are common image acquisition devices: first, B1 and G1 of C1 and B1 and G1 of C2 are transformed to obtain B1' and G1' of C1 and B1' and G1' of C2; then B1' and G1' of C1 are blue-green fused to obtain BG1' of C1, and BG1' of C2 is obtained in the same way; finally, BG1' of C1 and BG1' of C2 are stitched together to obtain the processed handprint image.
Exemplarily, performing fusion processing on the illumination light images, performing transformation processing based on the structured light images, and stitching the transformed images corresponding to the common image acquisition devices to obtain the processed handprint image may include: for each of the plurality of common image acquisition devices, performing transformation processing on the blue image in the at least one illumination light image corresponding to that image acquisition device based on the structured light image corresponding to that image acquisition device to obtain a transformed blue image corresponding to that image acquisition device, and performing transformation processing on the green image in the at least one illumination light image corresponding to that image acquisition device based on the structured light image corresponding to that image acquisition device to obtain a transformed green image corresponding to that image acquisition device; stitching the transformed blue images corresponding to the plurality of common image acquisition devices to obtain a stitched transformed blue image; stitching the transformed green images corresponding to the plurality of image acquisition devices to obtain a stitched transformed green image; and performing fusion processing on the stitched transformed blue image and the stitched transformed green image to obtain the processed handprint image.
This is the scheme of transformation first, then stitching, then blue-green fusion. Compared with the previous two schemes, this one is preferable, because fewer images need to be fused and the images used for fusion have a lower resolution than the original images, which effectively reduces the computational cost of the blue-green fusion.
Again taking as an example B1 and G1 captured by image acquisition device C1 and B1 and G1 captured by image acquisition device C2, where C1 and C2 are common image acquisition devices: first, B1 and G1 of C1 and B1 and G1 of C2 are transformed to obtain B1' and G1' of C1 and B1' and G1' of C2; then B1' of C1 and B1' of C2 are stitched to obtain B1'P, and G1' of C1 and G1' of C2 are stitched to obtain G1'P; finally, B1'P and G1'P are blue-green fused to obtain the processed handprint image.
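The preferred ordering (transform, then stitch, then blue-green fuse) can be sketched as a small pipeline. Here `rectify`, `stitch` and `fuse` are placeholder callables for the structured-light-based transformation, the stitching and the fusion steps; their internals and the `captures` data layout are assumptions for illustration.
def process_common_cameras(captures, rectify, stitch, fuse):
    """captures: list of dicts, one per common image acquisition device, each
    holding 'blue' and 'green' illumination light images and the corresponding
    'structured' structured light image."""
    blue_rect, green_rect = [], []
    for cap in captures:
        # 1) Transform each colour image using that camera's structured light image.
        blue_rect.append(rectify(cap["blue"], cap["structured"]))
        green_rect.append(rectify(cap["green"], cap["structured"]))
    # 2) Stitch per colour across cameras, leaving fewer images for the fusion step.
    blue_pano = stitch(blue_rect)
    green_pano = stitch(green_rect)
    # 3) Blue-green fusion is performed once, on the stitched images.
    return fuse(blue_pano, green_pano)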
Exemplarily, the structured light projector includes a light source, a pattern generating unit and a converging lens. The pattern generating unit is arranged in front of the light source, the light source is used to project the pattern on the pattern generating unit onto a projection plane so as to form a reconstruction structured light pattern and a stitching structured light pattern on the projection plane, and the converging lens is arranged on the light transmission path between the pattern generating unit and the projection plane. The reconstruction structured light pattern and the stitching structured light pattern are different, and there is no boundary overlap between the reconstruction structured light units in the reconstruction structured light pattern and the stitching structured light units in the stitching structured light pattern. The reconstruction structured light units are used for the transformation processing based on the structured light images, and the stitching structured light units are used for stitching the transformed images corresponding to the plurality of common image acquisition devices.
The reconstruction structured light units may be used for both the reconstruction transformation and the stitching, or different structured light units may be used for the reconstruction transformation and the stitching.
Exemplarily, the reconstruction structured light units and the stitching structured light units satisfy at least one of the following conditions: the reconstruction structured light units include stripes; the stitching structured light units include scattered dots; the distribution density of the stitching structured light units is greater than that of the reconstruction structured light units.
In this way, the reconstruction structured light units with the lower distribution density can be used for the reconstruction transformation to increase the reconstruction speed, while the stitching structured light units with the higher distribution density can be used for fine local stitching to improve stitching accuracy.
Exemplarily, the plurality of common image acquisition devices include a first common image acquisition device, a second common image acquisition device and a third common image acquisition device. The optical axis of the first common image acquisition device is perpendicular to the plane of the handprint collection area that faces the plurality of common image acquisition devices, the optical axis of the second common image acquisition device forms a first preset angle with the optical axis of the first common image acquisition device, and the optical axis of the third common image acquisition device forms a second preset angle with the optical axis of the first common image acquisition device; the subject includes a single finger.
This arrangement may correspond to configuration A1 of the image acquisition devices described above. The first, second and third common image acquisition devices can be used in scenarios where a single-finger rolled fingerprint needs to be obtained. The first, second and third common image acquisition devices may respectively be the first, second and third image acquisition devices described below. These three devices may be, or may be included in, the plurality of fifth image acquisition devices 1830 described below.
Exemplarily, the plurality of common image acquisition devices include a fourth common image acquisition device, a fifth common image acquisition device and a sixth common image acquisition device. The lenses of the fourth, fifth and sixth common image acquisition devices are located in a predetermined plane and are respectively aimed at a plurality of first sub-regions in the handprint collection area, any two adjacent ones of which overlap or adjoin each other; the subject includes a flat palm or the side of the palm.
This arrangement may correspond to configuration A2 of the image acquisition devices described above. The fourth, fifth and sixth common image acquisition devices can be used in scenarios where palm prints need to be obtained. When the handprint collection device is provided with only the fourth, fifth and sixth common image acquisition devices, it can collect palm prints but cannot be used to collect single-finger simulated rolled fingerprints. When the first, second and third common image acquisition devices are additionally provided, both single-finger rolled fingerprints and palm prints (and fingerprints) can be collected.
The optical axes of the fourth, fifth and sixth common image acquisition devices may be parallel to one another, and the three devices may lie in the same predetermined plane. The image acquisition ranges (fields of view) of any two adjacent ones of the fourth, fifth and sixth common image acquisition devices may partially overlap or adjoin each other. Each of the fourth, fifth and sixth common image acquisition devices can capture an image of at least part of the handprint collection area, so that together they can capture images of the entire handprint collection area. These three devices may be, or may be included in, the plurality of fourth image acquisition devices 1820 described below.
Exemplarily, the one or more image acquisition devices include a plurality of independent image acquisition devices that each photograph the same subject, and the clearly imageable object-plane subspaces of the plurality of independent image acquisition devices partially overlap. The processing device is further configured to determine, from the plurality of independent image acquisition devices and according to the structured light image and/or illumination light image captured by a preview image acquisition device, a target image acquisition device whose clearly imageable object-plane subspace matches the position of the subject; the images used for the fusion processing are captured by the target image acquisition device; the subject includes both thumbs or four fingers; the preview image acquisition device is all of the plurality of independent image acquisition devices, or the image acquisition device with the largest field of view among the plurality of independent image acquisition devices, or an image acquisition device that is different from the plurality of independent image acquisition devices and has a shorter focal length than they do.
A plurality of (for example, three) common image acquisition devices can each capture a part of the finger or palm, and the images they capture can be stitched together to obtain a complete handprint.
The application scenario of the plurality of independent image acquisition devices may be the collection of both thumbs or of four fingers. It can be understood that a both-thumbs or four-finger collection scenario means the subject is both thumbs or four fingers and the images of the thumbs or fingers need to be captured simultaneously; the processed handprint images may be two single-finger images corresponding to the two thumbs or four single-finger images corresponding to the four fingers. When photographing both thumbs or four fingers, the position, height and angle at which each finger is placed in the handprint collection area vary considerably. If only a single image acquisition device were used, many restrictions would be imposed on where, how high and at what angle the user's hand is placed in the handprint collection area, degrading the user experience. It is therefore desirable to obtain a larger depth of field and/or field of view by combining multiple image acquisition devices: wherever the user places a finger, the image acquisition device corresponding to that position is selected for capture, which relaxes the restrictions on the placement position, height and angle of the user's hand. Taking both thumbs or the four fingers as a whole, or each individual thumb or finger, as a camera-selection unit, for each camera-selection unit the image captured by the one independent image acquisition device whose height/position matches that unit is selected from the plurality of independent image acquisition devices for subsequent processing (without multi-camera stitching), yielding the final handprint image. For example, with a single finger as the camera-selection unit, using multiple independent image acquisition devices for the four fingers together gives a larger depth of field/field of view. When the clearly imageable object-plane subspaces overlap in the direction parallel to the optical axes of the image acquisition devices (for example, the vertical direction), a larger depth of field can be obtained with the multiple independent image acquisition devices; when they overlap in the direction perpendicular to the optical axes (for example, the horizontal direction), a larger field of view can be obtained. In the former case, the "position of the subject" in "a target image acquisition device whose clearly imageable object-plane subspace matches the position of the subject" may be the height of the subject; in the latter case, it may be the horizontal position of the subject.
It can be understood that the plurality of independent image acquisition devices may all capture images simultaneously (in which case all of them serve as preview image acquisition devices), the target image acquisition device for each camera-selection unit is determined from the captured images, and the images captured by the target image acquisition devices are used for subsequent processing. Alternatively, one of the plurality of independent image acquisition devices may serve as the preview image acquisition device, the target image acquisition device for each camera-selection unit is determined from its captured image, and the image captured by the target image acquisition device is used for subsequent processing (if the target acquisition device is not the preview image acquisition device, a new capture is made with the target acquisition device). Alternatively, a preview image acquisition device other than the plurality of independent image acquisition devices may be used for the preview capture, the target image acquisition device for each camera-selection unit is determined from the captured image, the target image acquisition device then captures the images, and those images are used for subsequent processing.
When only the plurality of independent image acquisition devices are used, only the fingerprints of both thumbs and of the four fingers may be collected, which corresponds to the thumb and four-finger application scenarios of configurations A3 and A2 of the image acquisition devices.
When a plurality of common image acquisition devices and a plurality of independent image acquisition devices are both used, the fingerprints of both thumbs and of the four fingers can be collected, and single-finger rolled fingerprints and palm prints can also be collected. When the fingerprints of the four fingers are collected, the common image acquisition devices used for collecting palm prints can serve as the independent image acquisition devices used for collecting the four fingers.
Exemplarily, the plurality of image acquisition devices include a plurality of first independent image acquisition devices. The lenses of the plurality of first independent image acquisition devices are arranged around a center line and face the handprint collection area, each first independent image acquisition device having a clearly imageable object-plane subspace within the depth-of-field range in front of and behind its best object plane. The clearly imageable object-plane subspaces of the plurality of first independent image acquisition devices partially overlap and together form a total clearly imageable space, and the handprint collection area includes this total clearly imageable object-plane space. The best object planes of the plurality of first independent image acquisition devices are at different positions within the handprint collection area; the focal lengths of the lenses of the plurality of first independent image acquisition devices are different; and/or the distances from the front ends of the lenses of the plurality of first independent image acquisition devices to the handprint collection area are different.
The first independent image acquisition devices may be the seventh image acquisition devices 2220 described below, i.e., the plurality of first independent image acquisition devices may be the plurality of seventh image acquisition devices 2220 described below.
When the preview image acquisition device is the image acquisition device with the largest field of view among the plurality of independent image acquisition devices, it may be the one with the shortest focal length among them. If the focal lengths of the plurality of independent image acquisition devices are the same, the preview image acquisition device may be any one of them or all of them.
Determining, from the plurality of image acquisition devices and according to the structured light image and/or illumination light image captured by the preview image acquisition device, a target image acquisition device whose clearly imageable object-plane subspace matches the position of the subject includes:
determining the height of the subject according to the structured light image captured by the preview image acquisition device, and determining, according to the height of the subject, a target image acquisition device whose clearly imageable object-plane subspace matches the height of the subject; or determining the height of each finger included in the subject according to the structured light image captured by the preview image acquisition device, and determining, according to the height of each finger, a target image acquisition device whose clearly imageable object-plane subspace matches the height of that finger.
When the height of the subject is determined, the height of the four fingers as a whole may be determined (the camera-selection unit is then the four fingers as a whole), and a suitable image acquisition device is determined as the target image acquisition device according to that height. Alternatively, the height of each individual finger may be determined (the camera-selection unit is then an individual finger), and a different image acquisition device may be determined as the target image acquisition device for each finger.
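A hedged sketch of height-based target-camera selection is given below, assuming each independent image acquisition device is described by the height interval of its clearly imageable object-plane subspace; the interval representation, the fallback rule and all names are illustrative assumptions.
def select_camera_by_height(finger_height_mm, cameras):
    # cameras: list of (camera_id, (z_min, z_max)) height intervals of the clearly
    # imageable object-plane subspaces, which partially overlap. The camera whose
    # interval contains the measured height and whose interval centre (best object
    # plane) is closest to it is chosen; if no interval contains the height, the
    # nearest camera is used as a fallback.
    candidates = [(cid, zr) for cid, zr in cameras if zr[0] <= finger_height_mm <= zr[1]]
    if not candidates:
        candidates = cameras
    return min(candidates,
               key=lambda c: abs((c[1][0] + c[1][1]) / 2 - finger_height_mm))[0]

# Per-finger selection (camera-selection unit = one finger), e.g.:
# cameras = [("cam_a", (100.0, 130.0)), ("cam_b", (120.0, 150.0))]
# heights = {"index": 112.0, "middle": 126.0}
# targets = {name: select_camera_by_height(h, cameras) for name, h in heights.items()}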
Exemplarily, the plurality of image acquisition devices include a plurality of eighth image acquisition devices. The lenses of the plurality of eighth image acquisition devices are located in a predetermined plane and are respectively aimed at a plurality of first sub-regions in the handprint collection area, any two adjacent ones of which overlap or adjoin each other.
When the preview image acquisition device is the image acquisition device with the largest field of view among the plurality of independent image acquisition devices, and the fields of view of the plurality of independent image acquisition devices are the same, the preview image acquisition device may be any one of them or all of them.
Determining, from the plurality of image acquisition devices and according to the structured light image and/or illumination light image captured by the preview image acquisition device, a target image acquisition device whose clearly imageable object-plane subspace matches the position of the subject includes:
determining the horizontal position of the subject (for example, both thumbs as a whole or the four fingers as a whole) according to the image captured by the preview image acquisition device, and determining, according to that horizontal position, a target image acquisition device whose clearly imageable object-plane subspace matches the horizontal position of the subject;
or determining the horizontal position of each finger included in the subject according to the image captured by the preview image acquisition device, and determining, according to the horizontal position of each finger, a target image acquisition device whose clearly imageable object-plane subspace matches the horizontal position of that finger.
For example, the position, pose and completeness of each finger in the images are determined from the images captured by the independent image acquisition devices. If, say, the little finger is captured by only one image acquisition device, that device is taken as the target image acquisition device for the little finger; if the middle finger is captured by two image acquisition devices, the device in whose image the middle finger is more centered, more complete, or in a better pose is selected as the target image acquisition device.
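The per-finger selection by horizontal position can be sketched as follows, assuming finger detections (finger centre x-coordinates per camera) are available from the preview images; the centring criterion is used here for illustration, while completeness and pose scoring are omitted and would be further assumptions.
def select_camera_by_position(detections, image_width):
    # detections: {finger_name: {camera_id: finger_centre_x}} measured on the
    # preview images of the independent image acquisition devices. A finger seen
    # by one camera keeps that camera; a finger seen by several keeps the camera
    # in whose image the finger centre is closest to the image centre.
    targets = {}
    for finger, seen_by in detections.items():
        targets[finger] = min(seen_by.items(),
                              key=lambda kv: abs(kv[1] - image_width / 2))[0]
    return targets

# Example: the little finger is seen only by cam_b, the middle finger by both cameras.
# detections = {"little": {"cam_b": 900}, "middle": {"cam_a": 640, "cam_b": 120}}
# select_camera_by_position(detections, image_width=1280) -> {"little": "cam_b", "middle": "cam_a"}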
When the horizontal position of the subject is determined, the horizontal position of the four fingers as a whole may be determined, and a suitable image acquisition device is determined as the target image acquisition device according to that horizontal position. Alternatively, the horizontal position of each individual finger may be determined, and a different image acquisition device may be determined as the target image acquisition device for each finger.
In the field of image processing, after images are captured of different parts of a three-dimensional target, the images captured at multiple different angles generally need to be aligned and stitched in order to obtain a complete target image. In the prior art, there is no alignment reference between two images during alignment and stitching, and the two images are usually simply superimposed and stitched together. The resulting stitching quality is poor, which affects the accuracy of subsequent target recognition (for example, face recognition or fingerprint recognition).
In order to at least partially solve the above technical problem, embodiments of the present application provide a structured light projector and an image acquisition system. The structured light emitted by the structured light projector can form a plurality of different structured light patterns. An image acquisition system using this structured light projector can project the different structured light patterns onto the surface of a target object (i.e., the three-dimensional target described herein) and, based on the different structured light patterns, obtain both the transformation relationship between the two-dimensional target images of the target object and the corresponding data to be stitched (for example, two-dimensional unrolled images or three-dimensional target models) and a precise alignment relationship between the data to be stitched. Based on this precise alignment relationship, the data to be stitched are stitched together, automatically obtaining the required overall data (for example, an overall unrolled image obtained by stitching a plurality of two-dimensional unrolled images, or an overall target model obtained by stitching a plurality of three-dimensional target models). The system can obtain high-precision overall data without excessive manual intervention.
Although fingerprints are mainly used as an example when describing the image acquisition system according to the embodiments of the present application, it can be understood that the image acquisition system according to the embodiments of the present application is not limited thereto and can be applied to image acquisition of any type of target, including but not limited to fingerprints, palm prints, faces, and the like. In one or some embodiments, the image acquisition system may be a handprint acquisition system for acquiring fingerprints and/or palm prints. The "handprint acquisition system" described herein may also be referred to as a "handprint acquisition apparatus/device" or a "contactless handprint acquisition apparatus/device/system", etc.
According to one aspect of the present application, a structured light projector is provided. The structured light projector includes a light source, a pattern generating unit and a converging lens. The pattern generating unit is arranged in front of the light source, the light source is used to project the pattern on the pattern generating unit onto a projection plane, and the converging lens is arranged on the light transmission path between the pattern generating unit and the projection plane. The structured light beam emitted by the structured light projector forms a reconstruction structured light pattern and a stitching structured light pattern on the projection plane; the reconstruction structured light pattern and the stitching structured light pattern are different, and there is no boundary overlap between the reconstruction structured light units in the reconstruction structured light pattern and the stitching structured light units in the stitching structured light pattern.
A detailed description is given below with reference to Figs. 1a-1d. Fig. 1a shows a schematic structural diagram of a structured light projector 100 according to an embodiment of the present application.
Exemplarily, referring first to Fig. 1a, the structured light projector 100 includes, in sequence, a light source 110, a pattern generating unit 120 and a converging lens 130. A projection plane 140 lies in front of the converging lens 130. As shown in Fig. 1a, the light source 110 can project the pattern on the pattern generating unit 120 onto the projection plane 140 through the converging lens 130, and the structured light beam emitted by the structured light projector 100 finally forms a specific structured light pattern on the projection plane 140.
Exemplarily, the light source 110 may be a point light source, for example including a single light-emitting element. The light-emitting element may be a light-emitting diode (LED) or a laser diode that emits laser light, or the like. The light-emitting element may emit laser light with a wavelength ranging from about 850 nm to about 940 nm, or may emit light in the near-infrared band or the visible band. In one example, the light-emitting element may emit red light with a wavelength of 660 nm as the structured light. The wavelength of the light emitted by the light-emitting element is not limited to any specific wavelength. The light source 110 may also include a light-emitting element together with other components, for example a lampshade, a lamp panel and a laser emitter capable of emitting structured light.
The pattern generating unit 120 is arranged in front of the light source 110 and can generate a specific pattern. After the light emitted by the light source 110 irradiates the pattern generating unit 120, part of it is blocked or has its transmission direction changed, while the remainder passes through the pattern generating unit 120 and is emitted at a preset angle, thereby forming a light spot with the desired pattern on the projection plane 140.
The converging lens 130 is arranged on the light transmission path between the pattern generating unit 120 and the projection plane 140. The projection plane 140 can be regarded as an ideal plane, i.e., the plane onto which the structured light would be projected when no target object is placed. It can be understood that when a target object (for example, a finger) is placed at the projection plane 140, the structured light emitted by the structured light projector 100 irradiates the target object, so that the structured light pattern formed undergoes a certain optical distortion. The light transmitted through the pattern generating unit 120 is converged by the converging lens 130 and then reaches the projection plane 140. The converging lens 130 may include various types of convex lenses, concave lenses or other forms of lenses, or a combination of several lenses. The present application places no restriction on the converging lens 130, as long as the converging lens 130 as a whole converges light and can converge the light passing through the pattern generating unit 120 to form a clear pattern on the projection plane 140.
There is a certain distance along the light transmission path between each pair of the light source 110, the pattern generating unit 120 and the converging lens 130. The distance between the light source 110 and the pattern generating unit 120 may be set to a first preset distance, and the distance between the pattern generating unit 120 and the converging lens 130 may be set to a second preset distance. The first preset distance and the second preset distance may be any suitable distances and may or may not be equal. The pattern generating unit 120 and the converging lens 130 may also be arranged at a preset angle. For example, the plane in which the pattern generating unit 120 lies may be placed at a preset angle relative to the optical axis of the converging lens 130. The preset angle may be any suitable angle, for example 90° or 45°. The plane in which the pattern generating unit 120 lies may be the plane of its longitudinal section. It can be understood that when the pattern generating unit 120 is thin, for example when it is a grating, the pattern generating unit 120 lies essentially in the same plane as its longitudinal section.
The present application does not limit the first preset distance, the second preset distance or the preset angle, as long as the structured light projector 100, once so arranged, can form a specific structured light pattern on the projection plane 140.
Exemplarily, the structured light pattern formed by the structured light projector 100 on the projection plane 140 includes a reconstructed structured light pattern and a spliced structured light pattern. Referring to FIGS. 1b-1d, schematic diagrams of structured light patterns according to embodiments of the present application are shown. FIG. 1b can be regarded as the structured light pattern formed on the projection plane 140 by the structured light beam emitted by the structured light projector 100 in FIG. 1a; this structured light pattern is composed of a reconstructed structured light pattern and a spliced structured light pattern. Furthermore, FIG. 1c shows the reconstructed structured light pattern 150 corresponding to the structured light pattern in FIG. 1b, and FIG. 1d shows the spliced structured light pattern 160 corresponding to the structured light pattern in FIG. 1b.
In the embodiments of the present application, the reconstructed structured light pattern is different from the spliced structured light pattern. Exemplarily, the difference between the reconstructed structured light pattern and the spliced structured light pattern may include a difference in pattern form. Exemplarily, the reconstructed structured light pattern 150 shown in FIG. 1c differs in form from the spliced structured light pattern 160 shown in FIG. 1d. The reconstructed structured light pattern 150 is composed of a plurality of reconstructed structured light units 151 of similar form, while the spliced structured light pattern 160 is composed of a plurality of spliced structured light units 161 whose form differs from that of the reconstructed structured light units 151. It can be understood that the reconstructed structured light unit 151 or the spliced structured light unit 161 shown in FIGS. 1c-1d is only one of many such units. In the reconstructed structured light pattern, the reconstructed structured light units may take one form or multiple forms. Likewise, in the spliced structured light pattern, the spliced structured light units may take one form or multiple forms.
Exemplarily, in addition to the difference in pattern form, the difference between the reconstructed structured light pattern and the spliced structured light pattern may further include a difference in purpose. When the structured light projector 100 of the embodiments of the present application is used for image acquisition of a target object, the reconstructed structured light pattern can be used to assist in determining the transformation relationship between the coordinates of the target object in three-dimensional space and its coordinates in the captured two-dimensional target image, while the spliced structured light pattern can be used for alignment between multiple pieces of data to be spliced (which may be two-dimensional unfolded images or three-dimensional target models) of the same target object, so as to assist in splicing the data to be spliced accurately and thereby obtain high-precision overall data.
Exemplarily, there is no boundary overlap between the reconstructed structured light units in the reconstructed structured light pattern and the spliced structured light units in the spliced structured light pattern. Exemplarily, referring to FIG. 1b, the stripe-shaped reconstructed structured light unit 152, the square spliced structured light unit 162 and the circular spliced structured light unit 163 together constitute the structured light pattern in that figure. In FIG. 1b, the boundary of each stripe-shaped reconstructed structured light unit can be regarded as the upper and lower sides of a rectangle, and neither the square spliced structured light unit 162 nor the circular spliced structured light unit 163 intersects either of the upper and lower sides of the stripe-shaped reconstructed structured light unit 152.
The specific roles of the reconstructed structured light pattern and the spliced structured light pattern can be understood with reference to the "image splicing method" described below.
According to the structured light projector of the present application, the structured light emitted by the light source is projected onto the surface of the target object through the pattern generating unit and the converging lens, and can form different reconstructed image patterns and spliced image patterns. The specific reconstructed image pattern can be used to determine the transformation relationship between the two-dimensional target image of the target object and the corresponding data to be spliced, and the spliced image pattern can be used to achieve alignment and registration between the data to be spliced. Therefore, using multiple different structured light patterns helps to splice together overall data with higher precision.
Exemplarily, the pattern generating unit 120 includes a diffractive optical element or a grating.
Exemplarily, the pattern generating unit 120 may include any existing or future element or group of elements capable of generating a specific structured light pattern. In one example, the pattern generating unit may be a diffractive optical element (DOE). A DOE is usually formed by a micro-nano etching process into a two-dimensional arrangement of diffraction units, each of which can have a specific shape, refractive index and so on, so that the wavefront phase distribution of structured light such as laser light can be finely controlled. The laser light is diffracted after passing through each diffraction unit and interferes at a certain distance, forming a specific light intensity distribution, which in turn forms a specific structured light pattern when projected onto the projection plane. By way of example and not limitation, a beam-shaping DOE (i.e., a DOE used for beam shaping) can be used to generate square, polygonal, strip-shaped, ring-shaped and circular spot patterns. In addition, a diffractive beam splitter can also be used to generate a one-dimensional or two-dimensional array of beams, so as to form a regular array of scattered points on the surface of the target object.
In another example, the pattern generating unit 120 may be a grating, for example a diffraction grating. Through its regular structure, a diffraction grating subjects the amplitude or phase (or both) of the incident light to periodic spatial modulation. The grating carries a pattern; after the light emitted by the light source strikes the grating, part of it is blocked while the other part is transmitted through the grating, thereby forming a light spot with the desired pattern on the surface of the target object.
Exemplarily, the grating can be made of various suitable materials. Exemplarily, the grating may be made of a material with good light transmission, for example a material based on glass or plastic. The structured light emitted by the light source is diffracted when passing through the grating and therefore has a certain transmission and divergence effect.
In one example, the grating may include a film on which the reconstructed structured light pattern and the spliced structured light pattern are formed. Exemplarily, a preset pattern may be scaled to a suitable size according to a certain ratio and then formed on the film, for example by spraying or printing. Exemplarily, referring to FIG. 1b again, if the pattern shown in FIG. 1b is to be formed on the projection plane, this can be achieved by spraying or printing on the film the reconstructed structured light pattern and the spliced structured light pattern corresponding to the pattern shown in FIG. 1b. Those skilled in the art can easily understand this technical solution, so it is not described again here. In the solution in which the grating includes a film, the user can set any desired reconstructed structured light pattern and spliced structured light pattern as needed, so the personalized needs of users can be met.
Exemplarily, the reconstructed structured light units and the spliced structured light units can be set in multiple forms as needed. Exemplarily, the reconstructed structured light units include stripes; and/or the spliced structured light units include scattered points. In the example in which the reconstructed structured light units include stripes, the stripes may be horizontal stripes, vertical stripes or oblique stripes. In the example in which the spliced structured light units include scattered points, the scattered points may take various forms, such as circular points, square points, triangular points or cross-shaped points. A single spliced structured light pattern may also include a combination of spliced structured light units of multiple forms, for example a combination of square points and cross-shaped points. For example, the spliced structured light units shown in FIG. 1b include square points and circular points.
Exemplarily, within one reconstructed structured light pattern, the reconstructed structured light units may be arranged according to a fixed rule. For example, stripes of equal width may be arranged at equal intervals, or stripes of not entirely identical widths may be arranged at not entirely identical intervals. For example, the reconstructed structured light pattern shown in FIG. 1b is a combination of stripes, in which the widths of the stripes are not entirely identical and the spacing between adjacent stripes is not entirely identical either. Similarly, the spliced structured light units in the spliced structured light pattern may also be arranged according to a fixed rule. An illustrative rendering of such a combined pattern is sketched below.
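As a purely illustrative sketch (not the actual DOE or grating design, and not part of the original disclosure), the following Python/NumPy fragment renders a combined pattern of stripe-shaped reconstructed structured light units with non-identical widths and spacings, plus a denser grid of square scattered points placed so that unit boundaries never overlap; all dimensions and positions are arbitrary example values.

import numpy as np

H, W = 480, 640
pattern = np.zeros((H, W), dtype=np.uint8)

# Reconstructed units: horizontal stripes with non-identical widths and gaps.
stripe_widths = [10, 6, 14, 8, 12]
stripe_gaps = [26, 30, 22, 28, 24]
y = 20
for width, gap in zip(stripe_widths, stripe_gaps):
    pattern[y:y + width, :] = 255            # bright stripe
    y += width + gap

# Spliced units: a denser grid of small square points, kept off the stripes
# so that the boundaries of the two kinds of units do not overlap.
for cy in range(10, H - 4, 24):
    for cx in range(10, W - 4, 24):
        if pattern[cy - 4:cy + 4, cx - 4:cx + 4].max() == 0:
            pattern[cy - 2:cy + 2, cx - 2:cx + 2] = 255

# `pattern` can now be saved as an image and, for example, printed onto a film.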
It can be understood that the form and arrangement of the reconstructed structured light units in the reconstructed structured light pattern and of the spliced structured light units in the spliced structured light pattern are determined by the pattern generating unit 120. In the example in which the pattern generating unit 120 is a DOE, the position and angle of the DOE in the light propagation path from the structured light to the projection plane, as well as the microstructure of the DOE surface, can all affect the reconstructed structured light pattern and the spliced structured light pattern projected onto the projection plane.
The stripes have a certain width and length, which makes them easy to identify and locate and therefore convenient for determining, on their basis, the transformation relationship between the two-dimensional target image of the target object and the corresponding data to be spliced. The scattered points, on the other hand, are densely distributed and numerous, which makes them easy to match and thus facilitates the alignment of the data to be spliced.
According to an embodiment of the present application, the distribution density of the spliced structured light units is greater than the distribution density of the reconstructed structured light units.
The distribution density of the reconstructed structured light units can be represented by the distribution density of a marking portion of each reconstructed structured light unit (the marking portion being at least a part of the reconstructed structured light unit). For example, in the case where the reconstructed structured light units are stripes, the distribution density of the midlines of the stripes can be used to represent the distribution density of the stripes. Similarly, the distribution density of the spliced structured light units can be represented by the distribution density of a marking portion of each spliced structured light unit (the marking portion being at least a part of the spliced structured light unit). For example, in the case where the spliced structured light units are scattered points, the distribution density of the center points of the scattered points (each of which usually has a certain area) can be used to represent the distribution density of the scattered points.
The distribution density of the spliced structured light units should not be too small, otherwise sufficient splicing accuracy cannot be obtained; nor should it be too large, because if the distribution density exceeds the deformation error, mismatches easily occur. For example, referring to FIG. 1b, the distribution density of the scattered points is greater than the distribution density of the stripes.
According to another aspect of the present application, an image acquisition system is also provided. The image acquisition system includes a structured light projector, an illumination system, one or more image acquisition devices and a processing device. The illumination system includes one or more light sources, each of which is used to emit light toward a target collection area, and the target collection area is used for placing a three-dimensional target. The one or more image acquisition devices are used to capture images of the target collection area while the illumination system emits light toward the target collection area, so as to obtain target images. The processing device is used to process the target images captured by the one or more image acquisition devices.
In the embodiments of the present application, the three-dimensional target may be, for example, the whole human body or a part of the human body, such as a face, fingers or a palm, or another object with a three-dimensional structure that needs to be captured. It should be understood that the image acquisition system according to the embodiments of the present application may be an image acquisition system for any suitable target object with a three-dimensional form, such as a human body, a part of a human body, a three-dimensional still object or an animal, and no limitation is imposed here. In one or some embodiments, the target collection area is a handprint collection area, and the three-dimensional target is part or all of the user's hand. For simplicity, a handprint collection system that collects fingerprints of fingers is taken as an example below.
Referring to FIG. 2, FIG. 2 shows a schematic diagram of an image acquisition system 200 and the related target object 400 and target collection area 300 according to an embodiment of the present application. It can be understood that FIG. 2 is a front view of the image acquisition system. The image acquisition system 200 shown in FIG. 2 is a handprint collection system 200, the target object 400 is a finger 400, and the target collection area 300 is a handprint collection area 300.
The structure of the handprint collection system 200 shown in FIG. 2 is only an example and does not limit the present application; the handprint collection system 200 according to the embodiments of the present application is not limited to the situation shown in FIG. 2. In the handprint collection system 200 shown in FIG. 2, there is one structured light projector 100, three image acquisition devices and six light sources in the illumination system. It should be understood that these numbers are only examples and may be any suitable numbers, which are not limited by the embodiments of the present application. As another example, the optical axes of the three image acquisition devices 212, 214 and 216 shown in FIG. 2 lie in the same plane and face the target collection area 300 from three different angles. The optical axis of the image acquisition device 214 located in the middle substantially coincides with the central axis 302 of the target collection area 300, while the optical axes of the image acquisition devices 212 and 216 located on the two sides form certain included angles with the central axis 302 of the target collection area 300. Of course, the relative distances and included angles between the three image acquisition devices 212, 214 and 216, and between each of them and the target collection area 300, can all be set as needed and are not limited here.
It can be understood that, compared with an illumination light source that is used only for illumination and not for encoding, the structured light projector emits structured light that is used for encoding.
By way of example and not limitation, the structured light projector 100 and the middle image acquisition device 214 shown in FIG. 2 may be located on different planes to avoid mutual interference. The structured light projector 100 may also be located on the same plane as the image acquisition device 214. In fact, the structured light projector 100 can be arranged at any suitable position as needed, as long as it does not hinder the imaging of the image acquisition devices and can illuminate the handprint collection area.
According to the image acquisition system of the embodiments of the present application, the structured light emitted by the structured light projector 100 and projected onto the surface of the target object can form a reconstructed image pattern corresponding to the reconstructed structured light pattern and a spliced image pattern corresponding to the spliced structured light pattern. As described above, the reconstructed image pattern can be used to determine the transformation relationship between the two-dimensional target image of the target object and the corresponding data to be spliced, and the spliced image pattern can be used to achieve alignment and registration between the data to be spliced. Therefore, with the image acquisition system including the structured light projector 100, alignment and registration between the data to be spliced can be achieved, which helps to splice together overall data with higher precision.
In the field of image processing, after images of different parts of a three-dimensional target have been captured, the images captured for the different parts need to be spliced together in order to obtain relatively complete information about the three-dimensional target. Certain splicing problems arise in doing so. Fingerprint recognition is taken as an example below.
In the process of obtaining a rolled fingerprint by non-contact acquisition, the images captured by multiple cameras need to be spliced together to finally obtain a fingerprint image that simulates a rolled impression. In the prior art, the multiple images captured by multiple cameras are usually superimposed directly. However, because calibration errors may exist in the camera calibration process, and because of many factors such as slight movements of the cameras and the finger during image acquisition, an image obtained by directly superimposing images captured by multiple cameras at different angles is likely to contain errors. FIG. 3 shows a schematic diagram of a spliced image obtained in the prior art by directly superimposing and splicing multiple fingerprint images together. Referring to FIG. 3, it can be seen that there are obvious seams at the splicing locations. Such an image will directly affect subsequent fingerprint recognition and reduce the accuracy of recognition.
In order to at least partially solve the above technical problems, an embodiment of the present application provides an image splicing method. In this image splicing method, while the three-dimensional target is illuminated with illumination light to obtain information such as its texture (for example fingerprint information), the three-dimensional target is also illuminated with structured light so as to project a structured light pattern onto it. The image pattern formed after the structured light pattern has been projected can assist in obtaining the data to be spliced and in aligning the data to be spliced, so that the data to be spliced can be spliced more accurately.
It should be noted that although the technical problems of the prior art are described above mainly using the splicing of fingerprint images as an example, the image splicing method according to the embodiments of the present application can be applied to the splicing of images captured for any type of target. The image splicing method according to the embodiments of the present application may be executed by the processing device 240 in the image acquisition system 200.
The image splicing method according to an embodiment of the present application is described below with reference to FIG. 4. FIG. 4 shows a schematic flowchart of an image splicing method 400 according to an embodiment of the present application. As shown in FIG. 4, the image splicing method 400 includes steps S410, S420, S430, S440 and S450.
Step S410: acquire a third target image and a fourth target image, wherein the target images include the third target image and the fourth target image, and the third target image and the fourth target image are obtained by capturing images of the target collection area while collection light is emitted toward the target collection area; the collection light includes structured light and illumination light, the beam of the structured light is able to form the reconstructed structured light pattern and the spliced structured light pattern, and the collection ranges on the target collection area corresponding to the third target image and the fourth target image partially overlap.
Herein, terms such as "first", "second", "third" and "fourth" are used only for the purpose of distinction and do not indicate order or any other special meaning.
The target collection area may be a region of physical space with any suitable shape, for example a spherical region, a cubic region or any other regular or irregular region. Optionally, lighting and image capture of the target collection area can be performed by the image acquisition system described above. Exemplarily, the image acquisition system may include one or more image acquisition devices for capturing images of the target collection area. By way of example and not limitation, the imaging ranges of all of the one or more image acquisition devices may together cover the entire target collection area, so that images of the entire target collection area can be captured. When a three-dimensional target, for example a user's finger or palm, is placed in the target collection area, the image acquisition devices can capture target images, for example images of the user's fingerprint or palmprint.
In one embodiment, the image acquisition system may include a single image acquisition device. In this case, optionally, target images at multiple angles can be captured by rotating and/or moving the image acquisition device. In another embodiment, the image acquisition system may include multiple image acquisition devices. In this case, optionally, target images at multiple angles can be captured by the multiple image acquisition devices. That is, the optical axes of any two of the multiple image acquisition devices may form a preset angle with each other, the preset angle being non-zero, so that target images at different angles can be captured. Therefore, whether a single image acquisition device or multiple image acquisition devices are used, corresponding target images can be captured for different parts (i.e., different angles) of the three-dimensional target.
Exemplarily, the third target image and the fourth target image may be images obtained by the above one or more image acquisition devices capturing different parts of the three-dimensional target. In addition, the collection ranges on the target collection area corresponding to the third target image and the fourth target image partially overlap. For example, the third target image and the fourth target image may be images captured by two adjacently placed image acquisition devices.
In one example, the image acquisition system may include a preset number of cameras arranged at preset positions. Exemplarily, the preset number is 3, and the image acquisition system includes a first camera (hereinafter C1), a second camera (hereinafter C2) and a third camera (hereinafter C3).
Exemplarily, C1, C2 and C3 may be arranged at preset angles in the same plane, or at preset angles in different planes. It can be understood that, in order to capture images of different parts of the three-dimensional target from different positions and angles, the three cameras can be positioned and arranged accordingly. Those of ordinary skill in the art can easily understand the principle, which is not described again here. Moreover, the images captured by every two adjacent cameras correspond to partially overlapping collection ranges on the target collection area. Exemplarily, for the adjacently located C1 and C2, the target image I1 and the target image I2 that they capture at the same moment or at different moments correspond to partially overlapping collection ranges on the target collection area; for the adjacently located C2 and C3, the target image I2 and the target image I3 that they capture at the same moment or at different moments likewise correspond to partially overlapping collection ranges on the target collection area.
The "third target image" and "fourth target image" acquired in step S410 may be any two images to be spliced. For example, when the images captured by C1 and C2 are to be spliced, the third target image and the fourth target image are I1 and I2, respectively; when the images captured by C2 and C3 are to be spliced, the third target image and the fourth target image are I2 and I3, respectively.
In addition to the above image acquisition devices, the image acquisition system may further include devices capable of emitting collection light, such as an illumination system and a structured light projector. Exemplarily, for human body data collection, the image acquisition system may be a three-dimensional body scanner; for fingerprint collection, the image acquisition system may be a non-contact 3D fingerprint acquisition device.
In the embodiments of the present application, the collection light includes structured light and illumination light. The structured light may be active structural information projected onto the surface of the measured object (here, the three-dimensional target) by a structured light projector, such as laser stripes, Gray codes or sinusoidal fringes. A structured light projector, for example an infrared laser emitter, can emit a specific pattern of near-infrared light. The illumination light may be visible light that satisfies preset illumination conditions, so that the third target image and the fourth target image satisfying preset quality requirements can be captured by the image acquisition devices.
Exemplarily, the beam of the structured light is able to form the reconstructed structured light pattern and the spliced structured light pattern, and the reconstructed structured light pattern differs from the spliced structured light pattern. Projecting the reconstructed structured light pattern onto the surface of the three-dimensional target can be used to reconstruct the body surface form of the three-dimensional target so as to obtain the data to be spliced; projecting the spliced structured light pattern onto the surface of the three-dimensional target can be used for the alignment and splicing synthesis of the data to be spliced corresponding to the multiple target images of that three-dimensional target.
It can be understood that after the reconstructed structured light pattern and the spliced structured light pattern are projected onto the three-dimensional target, some optical distortion usually occurs, and image patterns respectively corresponding to the reconstructed structured light pattern and the spliced structured light pattern can be found in the structured light image corresponding to the captured target image.
Exemplarily, the reconstructed structured light pattern may be a stripe pattern formed by the structured light projected by the structured light projector in the handprint collection system. The stripe pattern may include stripes of preset width. The widths of the stripes may be entirely identical, for example all 0.1 mm wide, or not entirely identical, for example stripes of three different widths arranged in sequence at uniform intervals. The stripe pattern may also include bright stripes and dark stripes. The reconstructed structured light pattern is composed of a plurality of reconstructed structured light units; in the example in which the reconstructed structured light pattern is a stripe pattern, each stripe is a reconstructed structured light unit. FIG. 5 shows an example of a reconstructed structured light pattern according to an embodiment of the present application. As shown in FIG. 5, the reconstructed structured light pattern is composed of 15 stripes arranged in a certain order, and each stripe is a reconstructed structured light unit, for example the first dark stripe 510, the first bright stripe 520 and the second dark stripe 530. In the example of FIG. 5, the width of the first bright stripe 520 is smaller than the width of the first dark stripe 510, and the width of the first dark stripe 510 is greater than the width of the second dark stripe 530.
Exemplarily, the spliced structured light pattern may be a scattered-point (or speckle) pattern formed by the structured light projected by the structured light projector in the image acquisition system; the speckle pattern includes a number of speckles, and each speckle may be one spliced structured light unit. The forms of the structured light units have been described above and are not repeated here. Since photographing is a projection process, the speckles undergo unpredictable deformation in the photograph. Therefore, some special speckle styles can be used to ensure that the speckle feature points recognized by the multiple cameras are free of errors. For example, setting the speckles in the shape of a cross and using the intersection point of the cross as the feature point for recognition helps to increase the accuracy of speckle recognition.
FIG. 6 shows an example of a structured light pattern, including a spliced structured light pattern, according to an embodiment of the present application. Exemplarily, the structured light pattern shown in FIG. 6 includes a reconstructed structured light pattern composed of stripes and a spliced structured light pattern composed of square speckles and circular speckles. As shown in FIG. 6, each of the square speckles 610 and the circular speckles 620 can be regarded as one spliced structured light unit. It can be understood that FIG. 6 contains 18 reconstructed structured light units and 488 spliced structured light units in total.
Exemplarily, the reconstructed structured light units and the spliced structured light units may overlap to a certain extent. For example, a plurality of spliced structured light units may be arranged on each reconstructed structured light unit. For example, 25 dark speckles are arranged on the first bright stripe shown in FIG. 6, and 26 bright speckles are arranged on the second dark stripe.
There is no boundary overlap between the reconstructed structured light units and the spliced structured light units, so that the detection of the reconstructed structured light units and the detection of the spliced structured light units do not affect each other, and both can be detected correctly. For example, the rectangular boundary of the second dark stripe shown in FIG. 6 does not overlap with the boundary of any bright speckle located within that boundary.
In a specific implementation, the distribution density of the spliced structured light units is greater than the distribution density of the reconstructed structured light units. In this way, three-dimensional reconstruction can be performed using the reconstructed structured light units, which have a smaller distribution density, so that a three-dimensional model is obtained quickly, and fine splicing is then performed using the spliced structured light units, which have a larger distribution density, thereby ensuring splicing accuracy. The distribution density of the spliced structured light units should not be too small, otherwise sufficient splicing accuracy cannot be obtained; nor should it be too large, because if the distribution density exceeds the deformation error, mismatches easily occur. For example, referring to FIG. 6, the distribution density of the speckles is greater than the distribution density of the stripes.
Step S420: determine a first structured light image and a first illumination light image corresponding to the third target image, and determine a second structured light image and a second illumination light image corresponding to the fourth target image. The first structured light image contains a first reconstructed image pattern corresponding to the reconstructed structured light pattern and a first spliced image pattern corresponding to the spliced structured light pattern; the second structured light image contains a second reconstructed image pattern corresponding to the reconstructed structured light pattern and a second spliced image pattern corresponding to the spliced structured light pattern.
In the example in which the three-dimensional target is a finger, n target images captured for that finger can be acquired through the above step S410; for n equal to 2, for example, the third target image and the fourth target image captured from different positions/angles are obtained. From the foregoing description it can be understood that both the third target image and the fourth target image may include the following image information: structured light image information obtained as the structured light projector emits structured light toward the three-dimensional target, and illumination light image information obtained as the illumination system emits illumination light toward the three-dimensional target. In the embodiments of the present application, the structured light image information corresponding to the third target image is denoted as the first structured light image, and the illumination light image information corresponding to the third target image is denoted as the first illumination light image; correspondingly, the structured light image information corresponding to the fourth target image is denoted as the second structured light image, and the illumination light image information corresponding to the fourth target image is denoted as the second illumination light image.
Exemplarily, if the color of the structured light belongs to a specific color channel, for example red structured light, the image information of that channel can be extracted directly from any target image, so as to obtain the corresponding structured light image. If the color of the structured light spans multiple color channels, for example yellow structured light, the image information of the multiple color channels can be extracted from any target image and fused to obtain the corresponding structured light image.
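As a minimal, non-authoritative sketch of this channel extraction (assuming a BGR target image captured under red structured light and using OpenCV/NumPy; the function names and fusion weights below are illustrative assumptions, not part of the original system):

import cv2
import numpy as np

def structured_light_image(target_bgr: np.ndarray) -> np.ndarray:
    # Red structured light: the red channel alone carries the structured light image.
    _blue, _green, red = cv2.split(target_bgr)
    return red

def structured_light_image_multi(target_bgr: np.ndarray,
                                 weights=(0.0, 0.5, 0.5)) -> np.ndarray:
    # Structured light spanning several channels (e.g. yellow light):
    # fuse the relevant channels with example weights given in B, G, R order.
    w = np.asarray(weights, dtype=np.float32)
    fused = (target_bgr.astype(np.float32) * w).sum(axis=2)
    return np.clip(fused, 0, 255).astype(np.uint8)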
Exemplarily, referring to FIGS. 7a-7c, schematic diagrams of three structured light images corresponding to fingerprint capture at three different angles are shown. Exemplarily, the three structured light images shown in FIGS. 7a-7c, from left to right, may be obtained by extracting the r channel from the three target images captured by the three cameras C1, C2 and C3 of the non-contact 3D fingerprint acquisition device, the three target images being captured by C1, C2 and C3, respectively, while the structured light projector used red light to project a structured light pattern similar to that shown in FIG. 6 onto the three-dimensional target. It can be seen from FIGS. 7a-7c that each structured light image contains a reconstructed image pattern corresponding to the reconstructed structured light pattern (bright and dark stripes) shown in FIG. 6 and a spliced image pattern corresponding to the spliced structured light pattern (square speckles and circular speckles) shown in FIG. 6.
Exemplarily, since many interfering factors may exist during scanning, the obtained structured light image may contain some noise or interference regions, which may adversely affect subsequent steps, for example by increasing the amount of computation or affecting the accuracy of subsequent measurement and recognition. The acquired structured light image can therefore optionally be subjected to de-interference processing. Exemplarily, step S420 may further include: for each target image, acquiring an initial structured light image; and performing de-interference processing on the initial structured light image to obtain a de-interfered structured light image. Subsequent steps such as step S430 are then performed based on the de-interfered structured light image. Further explanation is given below with reference to FIG. 8. When the subject's fingernail is present in the captured image, since the fingernail is not the target region of fingerprint collection, its unavoidable appearance in the initial structured light image may affect the accuracy of the finally generated fingerprint image; it can therefore be removed by de-interference processing. Exemplarily, a semantic segmentation model (for example a U-Net model) can be used to identify the fingernail in the initial structured light image and remove it as noise. FIG. 8 shows a schematic diagram of a structured light image after fingernail removal.
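A sketch of this de-interference step is given below, assuming that a pre-trained segmentation network (for example a U-Net) is available and wrapped in a callable `predict_nail_mask` that returns a Boolean mask of fingernail pixels; that callable is a placeholder assumption, not the API of any particular library:

import numpy as np

def remove_nail_region(initial_structured_light: np.ndarray,
                       predict_nail_mask) -> np.ndarray:
    # predict_nail_mask(img) -> HxW bool array, True where the model sees fingernail.
    mask = predict_nail_mask(initial_structured_light)
    cleaned = initial_structured_light.copy()
    cleaned[mask] = 0          # treat fingernail pixels as noise and remove them
    return cleaned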
As described above, the illumination light may include blue light and/or green light or light of other colors. Similarly, when the illumination light contains only one color channel, the image information under that channel can be extracted directly from the target image to obtain the corresponding illumination light image. For example, the image information under the blue channel can be extracted directly to obtain a single-channel illumination light image as the corresponding illumination light image. When the illumination light contains multiple color channels, the image information under each of the multiple color channels can first be extracted from the target image to obtain the corresponding single-channel illumination light images, and the multiple single-channel illumination light images are then fused to obtain the final illumination light image.
Exemplarily, step S420 may further include: for each target image, extracting color channels from that target image to obtain at least one single-channel illumination light image in one-to-one correspondence with at least one color channel; and fusing the at least one single-channel illumination light image to obtain the illumination light image corresponding to that target image, wherein the at least one color channel is a color channel contained in the illumination light. For example, after C1, C2 and C3 have captured target images of the three-dimensional target that include the blue and green channels, the image information of the blue and green channels can be extracted from the target image captured by C1 to obtain a blue-channel illumination light image and a green-channel illumination light image, which are then fused to obtain the illumination light image corresponding to the target image captured by C1; similarly, the illumination light image corresponding to the target image captured by C2 and the illumination light image corresponding to the target image captured by C3 can be further determined.
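A minimal sketch of this extraction and fusion, assuming a BGR target image with blue and green illumination and using a simple equal-weight average as the example fusion rule (the fusion actually used by the system may differ):

import cv2

def illumination_light_image(target_bgr):
    blue, green, _red = cv2.split(target_bgr)
    # Fuse the two single-channel illumination light images (simple average here).
    return cv2.addWeighted(blue, 0.5, green, 0.5, 0)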
Step S430: determine a first transformation relationship based on the reconstruction constituent units contained in the first reconstructed image pattern, and transform the first illumination light image according to the first transformation relationship to obtain first data to be spliced; determine a second transformation relationship based on the reconstruction constituent units contained in the second reconstructed image pattern, and transform the second illumination light image according to the second transformation relationship to obtain second data to be spliced. The first transformation relationship and the second transformation relationship are unfolding transformation relationships, the first data to be spliced is a first unfolded image, and the second data to be spliced is a second unfolded image; alternatively, the transformations corresponding to the first transformation relationship and the second transformation relationship are three-dimensional reconstruction, the first data to be spliced is a first target model, and the second data to be spliced is a second target model.
Exemplarily, a reconstructed structured light pattern formed by the structured light, such as a stripe pattern, can undergo optical distortion after being projected onto the surface of a three-dimensional object. For example, in the case where the reconstructed structured light pattern formed by the structured light emitted by the structured light projector consists of straight stripes, the pattern projected onto the pulp of a finger can appear as the arc-like stripes shown in FIGS. 7a-7c. Exemplarily, the position and form of each reconstruction constituent unit (for example each stripe) in the reconstructed image pattern can be analyzed according to optical principles, taking into account the intrinsic parameters and relative positional relationships of the structured light projector and the image acquisition devices, and the illumination light image corresponding to each target image can then be transformed accordingly to obtain the transformed data to be spliced.
Exemplarily, a first kind of transformation relationship can be determined based on the stripes contained in FIG. 7a, and the corresponding illumination light image can be transformed according to this first kind of transformation relationship to obtain a first group of data to be spliced; correspondingly, a second kind of transformation relationship can be determined based on the stripes contained in FIG. 7b and a third kind of transformation relationship based on the stripes contained in FIG. 7c, and a second group of data to be spliced and a third group of data to be spliced can further be obtained, respectively. For example, when the illumination light image at the angle corresponding to FIG. 7a and the illumination light image at the angle corresponding to FIG. 7b need to be spliced, the first transformation relationship is the first kind of transformation relationship, the second transformation relationship is the second kind of transformation relationship, the first data to be spliced is the first group of data to be spliced, and the second data to be spliced is the second group of data to be spliced.
In one example, the first transformation relationship and the second transformation relationship are unfolding transformation relationships, the first data to be spliced is the first unfolded image, and the second data to be spliced is the second unfolded image. In this example, the first transformation relationship and the second transformation relationship may be transformation relationships from a two-dimensional illumination light image to a two-dimensional unfolded image. For each target image, each pixel of the two-dimensional illumination light image obtained in step S420 can be converted one by one directly according to the corresponding transformation relationship to obtain new two-dimensional image data. The specific implementation of this example is explained in detail below.
In another example, the transformations corresponding to the first transformation relationship and the second transformation relationship are three-dimensional reconstruction, the first data to be spliced is the first target model, and the second data to be spliced is the second target model. In this example, for each target image, three-dimensional reconstruction can be performed on the target image according to the positions and forms of its reconstruction constituent units so as to obtain the corresponding transformation relationship, and the obtained two-dimensional illumination light image is mapped onto the reconstructed three-dimensional model according to that transformation relationship to obtain the corresponding three-dimensional target model. The specific implementation of this example is explained in detail below.
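For the unfolding variant, once the stripe analysis has yielded, for every pixel of the unfolded image, the coordinates of its source pixel in the illumination light image, the transformation itself reduces to a per-pixel remapping. The sketch below assumes such mapping arrays `map_x`/`map_y` are already available; deriving them from the reconstruction constituent units and the calibration of the projector and cameras is system-specific and is not shown here:

import cv2
import numpy as np

def unfold_illumination_image(illum_img: np.ndarray,
                              map_x: np.ndarray,
                              map_y: np.ndarray) -> np.ndarray:
    # map_x/map_y: float32 arrays shaped like the unfolded image, giving for
    # each output pixel its source coordinates in illum_img.
    return cv2.remap(illum_img, map_x, map_y,
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)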
Step S440: determine a matching relationship between first mapped splicing constituent units and second mapped splicing constituent units according to first mapping positions of the first mapped splicing constituent units and second mapping positions of the second mapped splicing constituent units. The first mapping position is used to represent the position of a first mapped splicing constituent unit, and the second mapping position is used to represent the position of a second mapped splicing constituent unit; the first mapping position is equal to the position obtained by performing the operation corresponding to the first transformation relationship on the position of a splicing constituent unit in the first structured light image, and the second mapping position is equal to the position obtained by performing the operation corresponding to the second transformation relationship on the position of a splicing constituent unit in the second structured light image.
Exemplarily, refer again to the structured light pattern shown in FIG. 6. As described above, each circular speckle and each square speckle therein can be regarded as one splicing constituent unit. Since the reconstructed structured light pattern and the spliced structured light pattern formed by the structured light beam can be arranged according to a preset rule, a corresponding positional relationship also exists between the reconstruction constituent units and the splicing constituent units in the first structured light image. The first transformation relationship and the second transformation relationship were obtained in step S430. Therefore, the operation corresponding to the first transformation relationship can be performed on the positions of the splicing constituent units in the first structured light image to obtain the first mapped splicing constituent units and their corresponding positions, i.e. the first mapping positions; similarly, the second mapped splicing constituent units and the second mapping positions can also be obtained.
It is easy to understand that each mapped splicing constituent unit can be regarded as an alignment mark unit for aligning the data to be spliced; alignment mark units located at approximately the same position in the two pieces of data to be spliced can be paired, so that in the subsequent splicing process the data to be spliced are spliced based on the paired alignment mark units. Exemplarily, the matching relationship between the first mapped splicing constituent units and the second mapped splicing constituent units may include matched speckle pairs in one-to-one correspondence. By way of example and not limitation, a U-Net model can be used to identify the splicing constituent units in the two structured light images corresponding to two adjacent image acquisition devices and to determine the position coordinates of the splicing constituent units in each structured light image; the position coordinates of each splicing constituent unit are then converted according to the unfolding transformation relationship to obtain the position coordinates of each splicing constituent unit on the image plane of the unfolded images, thereby obtaining the first mapping positions of the first mapped splicing constituent units and the second mapping positions of the second mapped splicing constituent units. In another example, the two structured light images corresponding to two adjacent image acquisition devices can each be converted according to the unfolding transformation relationship, and the mapped splicing constituent units are identified in the converted structured light images to obtain the position coordinates of each mapped splicing constituent unit on the image plane of the unfolded images, thereby obtaining the first mapping positions of the first mapped splicing constituent units and the second mapping positions of the second mapped splicing constituent units. It can be understood that the mapping positions obtained in the above two ways are approximately equal, so "equal to" in step S440 can be understood as approximately equal to. Afterwards, the mapped splicing constituent units (first mapped splicing constituent units and second mapped splicing constituent units) that satisfy a preset distance requirement on the image plane of the unfolded images can be matched one to one, yielding matched pairs of mapped splicing constituent units. It should be noted that the first unfolded image and the second unfolded image lie on the same image plane.
In the example in which the transformations corresponding to the first transformation relationship and the second transformation relationship are three-dimensional reconstruction, step S440 can map each splicing constituent unit in each structured light image one by one into the three-dimensional space (the space in which the first target model and the second target model are located) to obtain the mapping position of each mapped splicing constituent unit. For two or more target images, a corresponding number of three-dimensional target models can be obtained, and the three-dimensional target models can be represented by point clouds. For any two models with overlapping point clouds, the mapped splicing constituent units that satisfy the preset distance requirement can be matched by position according to the mapping positions of the mapped splicing constituent units, so as to obtain the corresponding matching relationship.
Step S450: based on the matching relationship, stitch the first data to be stitched and the second data to be stitched to obtain overall data.

The above step S440 determines the matching relationship between the first mapped stitching component units and the second mapped stitching component units. For example, if it is determined that a first mapped stitching component unit and a second mapped stitching component unit satisfy the preset distance requirement, they may be taken as a pair of mapped stitching component units. For example, if the first mapped position is at coordinates (2, 9) and the second mapped position is at coordinates (3, 9), the two belong to mutually matching mapped stitching component units. Accordingly, the matching relationship between at least some of the first mapped stitching component units and at least some of the second mapped stitching component units can be determined. Based on this matching relationship, the alignment relationship between at least some pixels of the first data to be stitched and at least some pixels of the second data to be stitched can be further determined. Thus, the corresponding pixels of the first data to be stitched and the second data to be stitched can be stitched and combined according to the alignment relationship to obtain the overall data.

Exemplarily, based on the same principle, when third data to be stitched is also included, after the overall data of the first data to be stitched and the second data to be stitched is obtained, the obtained overall data can be stitched and combined with the third data to be stitched according to the corresponding matching relationship, finally obtaining overall data covering all three. Of course, it is also possible to select one of the first, second and third data to be stitched as reference data according to the pairwise matching relationships among them (for example, selecting the second data to be stitched, corresponding to the middle camera, as the reference data to be stitched), and to stitch the three at one time to obtain the overall data. Those of ordinary skill in the art can understand the implementation principle of the above method, which will not be repeated here.

According to the above technical solution, a plurality of target images acquired under structured light and illumination light are obtained; the illumination light image corresponding to each target image is transformed according to the reconstruction component units in the structured light image corresponding to that target image, to obtain the data to be stitched corresponding to each target image; and the plurality of data to be stitched are precisely stitched and combined according to the stitching component units in the structured light images, to obtain the combined overall data. Different from the prior art, the image stitching method of the embodiments of the present application applies different component units of the structured light image to different processing functions: one part is used for transforming the data form, and the other part is used for precise alignment between data. The overall data of the target object obtained after processing with this solution has higher stitching accuracy and a better stitching effect, and is friendlier to the derived fields in which the data is applied. For example, when fingerprint recognition is performed on a simulated rolled fingerprint image obtained through this solution, it is compatible with fingerprint images collected by the traditional rolled-ink method, which can greatly improve the accuracy of fingerprint recognition and thus significantly improve the user experience.
Exemplarily, in step S430, the first transformation relationship and the second transformation relationship are expansion transformation relationships, the first data to be stitched is a first expanded image, and the second data to be stitched is a second expanded image. Determining the first transformation relationship based on the reconstruction component units contained in the first reconstructed image pattern may include: performing three-dimensional reconstruction according to the positions of the reconstruction component units contained in the first reconstructed image pattern to obtain a first model; obtaining a first first-order transformation relationship according to the positions of the reconstruction component units contained in the first reconstructed image pattern and the first model; expanding the first model along a common coordinate axis to obtain the expanded position of each point on the first model; obtaining a first second-order transformation relationship according to the first model and the expanded positions of the points on the first model; and obtaining the first transformation relationship according to the first first-order transformation relationship and the first second-order transformation relationship. In addition, determining the second transformation relationship based on the reconstruction component units contained in the second reconstructed image pattern includes: performing three-dimensional reconstruction according to the positions of the reconstruction component units contained in the second reconstructed image pattern to obtain a second model; obtaining a second first-order transformation relationship according to the positions of the reconstruction component units contained in the second reconstructed image pattern and the second model; expanding the second model along the common coordinate axis to obtain the expanded position of each point on the second model; obtaining a second second-order transformation relationship according to the second model and the expanded positions of the points on the second model; and obtaining the second transformation relationship according to the second first-order transformation relationship and the second second-order transformation relationship.

The implementation of the above steps is explained in detail below, taking FIG. 8 as an example of the first structured light image.
The reconstruction component units of FIG. 8 are a number of bright and dark stripes whose widths are not all the same, and the first model can be obtained by performing three-dimensional reconstruction according to the position of each stripe in the image coordinate system. Exemplarily, FIG. 8 shows a plurality of stripes with different positions and shapes, and each stripe may be labeled in turn according to a preset rule. For example, the thinnest stripe 810 may be labeled i=0, the remaining stripes above it may be labeled i=-1, -2, -3, ..., -6 in turn, and the remaining stripes below it may be labeled i=1, 2, 3, ..., 11 in turn.

Exemplarily, the structured light image may be input into a neural network for detecting reconstruction component units, to obtain the label of each reconstruction component unit contained in the structured light image and the position of each reconstruction component unit in the image coordinate system. Exemplarily, the position of a reconstruction component unit in the image coordinate system may be represented by the coordinates, in the image coordinate system, of a number of points distributed at a preset point density along the centerline of the reconstruction component unit. It can be understood that the position of a reconstruction component unit may also be represented by other points, such as the upper and lower edge points of the reconstruction component unit; however, when the detected size of a reconstruction component unit changes due to changes in the pixel values of the structured light image (the edges tend to change accordingly), using points on the centerline is more robust. It is easy to understand that the number of points acquired on the centerline of each stripe may not be exactly the same. Let the coordinates of a point on the centerline of each reconstruction component unit be (x, y).

Exemplarily, three-dimensional reconstruction may be performed on each reconstruction component unit through the projection matrix of the camera and the light plane parameters of the structured light. Exemplarily, for the case where the reconstruction component units are stripes, the three-dimensional coordinates (u, v, w) of each point on the centerline of each stripe can be obtained.
For example, for the stripe with i=1, the following formula (1) holds:

α·(x, y, 1)^T = p·(u, v, w)^T     Formula (1)

where p denotes the projection matrix of the camera, which is a known quantity; (u, v, w) denotes the three-dimensional coordinates of a point on the stripe centerline after three-dimensional reconstruction, which are unknown quantities; (x, y) denotes the coordinates of a point on the centerline of the stripe with i=1 in the reconstruction structured light pattern of the structured light image, which are known quantities; and α is a distance coefficient, which is an unknown quantity.

For the case where p is a 3×3 matrix, those of ordinary skill in the art will readily understand that three equations can be obtained from the above formula (1), and the unknowns u, v, w and α can be further solved.
Exemplarily, before the target images are acquired, the normal vector of the light plane on which each reconstruction component unit (for example, each stripe) lies may be obtained through a calibration process for each camera. FIG. 9 is a simple schematic diagram of acquiring structured light stripes with a camera, showing the first light plane 910 on which the stripe i=1 lies and the second light plane 920 on which the stripe i=2 lies. Based on the result of calibrating the camera, the correspondence between the three-dimensional coordinates of each stripe i=n in the structured light image and the normal vector of the light plane on which it lies can be obtained, and from this a fourth equation corresponding to each point on the stripe centerline can be obtained, as shown in formula (2).

In formula (2), the normal vector of the light plane on which the stripe lies is a known quantity.

Thus, a total of four equations are obtained for each point on each stripe centerline. By solving these four equations, the three-dimensional coordinates (u, v, w) of each point on the centerline of each stripe after three-dimensional reconstruction can be obtained.
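For illustration only, the following Python sketch shows one way the four-equation solve described above could be carried out for a single centerline point. It assumes the light plane is supplied as a normal vector n together with an offset d from calibration (the offset is not explicitly named in the text), and the projection matrix and numeric values are placeholders rather than real calibration data.

```python
import numpy as np

def reconstruct_centerline_point(p, xy, n, d):
    """Solve alpha*(x, y, 1) = p @ (u, v, w) together with the light-plane
    equation n . (u, v, w) = d for the unknowns u, v, w, alpha.

    p  : 3x3 camera projection matrix (known from calibration)
    xy : (x, y) image coordinates of a stripe-centerline point (known)
    n  : normal vector of the light plane of this stripe (known)
    d  : plane offset (assumed to come from the same calibration)
    """
    x, y = xy
    # Unknown vector is (u, v, w, alpha): 3 projection equations + 1 plane equation.
    A = np.zeros((4, 4))
    b = np.zeros(4)
    A[:3, :3] = p
    A[:3, 3] = -np.array([x, y, 1.0])   # p @ X - alpha * (x, y, 1) = 0
    A[3, :3] = n                        # n . X = d
    b[3] = d
    u, v, w, alpha = np.linalg.solve(A, b)
    return np.array([u, v, w]), alpha

# Toy usage with made-up calibration values.
p = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
point_3d, alpha = reconstruct_centerline_point(p, (350.0, 260.0),
                                               n=(0.0, 0.6, 0.8), d=40.0)
print(point_3d, alpha)
```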
Exemplarily, from the two-dimensional image coordinates (x, y) of the points on each stripe centerline in the first structured light image and the corresponding three-dimensional coordinates (u, v, w) obtained by three-dimensional reconstruction, the correspondence between the two-dimensional image coordinates of the points on the stripe centerlines and their three-dimensional reconstructed coordinates can be solved. This correspondence is the first-order transformation relationship, which may be a three-dimensional reconstruction interpolation function, expressed by the following formula (3):

(u, v, w) = f(x, y)     Formula (3)

Exemplarily, according to the above formula (3), the three-dimensional coordinates of every point of the first structured light image (including points other than those on the stripe centerlines) can be further obtained. In this way, a visualized three-dimensional point cloud model as shown in FIG. 10, namely the first model, is obtained in three-dimensional space.
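As a purely illustrative sketch of how the sparse centerline correspondences of formula (3) might be interpolated to every pixel, the snippet below builds a function f from a few (x, y) to (u, v, w) samples using SciPy's griddata. The function name, the choice of linear interpolation with nearest-neighbour fallback, and the toy data are assumptions introduced here, not the disclosure's exact procedure.

```python
import numpy as np
from scipy.interpolate import griddata

def build_first_order_transform(centerline_xy, centerline_uvw):
    """Interpolate sparse centerline correspondences (x, y) -> (u, v, w)
    into a function usable for every pixel of the structured light image."""
    centerline_xy = np.asarray(centerline_xy, dtype=float)
    centerline_uvw = np.asarray(centerline_uvw, dtype=float)

    def f(query_xy):
        query_xy = np.asarray(query_xy, dtype=float)
        cols = []
        for k in range(3):  # interpolate u, v, w separately
            lin = griddata(centerline_xy, centerline_uvw[:, k], query_xy, method='linear')
            near = griddata(centerline_xy, centerline_uvw[:, k], query_xy, method='nearest')
            cols.append(np.where(np.isnan(lin), near, lin))
        return np.stack(cols, axis=-1)

    return f

# Toy data: a few centerline points of two stripes and their reconstructed 3D coordinates.
xy = [(10, 5), (20, 5), (30, 5), (10, 15), (20, 15), (30, 15)]
uvw = [(1.0, 0.5, 9.0), (2.0, 0.5, 9.1), (3.0, 0.5, 9.0),
       (1.0, 1.5, 8.8), (2.0, 1.5, 8.9), (3.0, 1.5, 8.8)]
f = build_first_order_transform(xy, uvw)
print(f([(15, 10), (25, 12)]))   # interpolated (u, v, w) for two query pixels
```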
Similarly, with reference to the above solution, the second model corresponding to the fourth target image of the same three-dimensional target, a third model corresponding to another target image different from the third target image and the fourth target image (for example, a fifth target image), and so on, as well as the first-order transformation relationship corresponding to each model, can also be obtained, which will not be repeated here.

Exemplarily, after the first model and the first first-order transformation relationship are obtained through the above steps, the first model can be further expanded. The three-dimensional body surface of a finger can be regarded as an approximate cylinder; the central axis of the cylinder can be found, and the lateral surface of the cylinder can be expanded according to that central axis. In the embodiments of the present application, a common coordinate axis of the multiple three-dimensional models of the three-dimensional target may be found first, and the expansion is then performed according to the common coordinate axis.

Exemplarily, before the first model is expanded along the common coordinate axis to obtain the expanded positions of the points on the first model, the method 400 may further include: forming three-dimensional point groups from the three-dimensional points of the first model and the second model that correspond to the same reconstruction component unit of the reconstruction structured light pattern, and fusing the three-dimensional coordinates of each three-dimensional point group to obtain fused coordinates; and determining the common coordinate axis according to the fused coordinates of the reconstruction component units.

Exemplarily, fusing the three-dimensional coordinates of the three-dimensional point groups to obtain fused coordinates and determining the common coordinate axis according to the fused coordinates of the reconstruction component units includes: determining the fused coordinates of each reconstruction component unit according to the three-dimensional coordinates of the points on the centerline of each reconstruction component unit (for example, each stripe) in the three-dimensional point group; and determining the corresponding common coordinate axis from the fused coordinates of the reconstruction component units by means of principal component analysis.
Exemplarily, in the three models obtained in the above steps, the three-dimensional coordinates of the points on the centerline of each reconstruction component unit can be looked up respectively. For example, the centerlines corresponding to the stripe i=5 are extracted from the three models; it can be understood that there are multiple overlapping points between any two of these three centerlines. The points on the three centerlines corresponding to the stripe i=5 form the three-dimensional point group of the stripe i=5.

A deduplication operation can be performed on the points located on the centerline of the same reconstruction component unit to obtain the fused coordinates of that reconstruction component unit. Exemplarily, the coordinates of the points in the overlapping region of every two models may be fused by averaging. For example, on an overlapping centerline segment, each model has 10 corresponding points, and the 10 points of the two models can be associated one by one according to the nearest-distance relationship, giving 10 three-dimensional point pairs; the midpoint of each three-dimensional point pair is computed, and each midpoint is taken as a fused point of the overlapping region. For the stripe i=5, the fused points of the overlapping region plus the original points of the three models located in the non-overlapping regions of that stripe form the deduplicated points. The coordinates of the deduplicated points of the stripe i=5 can be regarded as the fused coordinates of the stripe i=5.
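The following is a minimal sketch, for illustration only, of the averaging-based deduplication just described for two models of the same stripe: overlapping points are paired by nearest distance and replaced by their midpoints, while non-overlapping points are kept. The overlap radius and the toy coordinates are assumptions introduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_overlapping_centerlines(pts_a, pts_b, overlap_radius=0.5):
    """Fuse the centerline points of the same stripe seen in two models.

    Points of model A whose nearest neighbour in model B is closer than
    `overlap_radius` are treated as the overlapping region; each such pair is
    replaced by its midpoint.  All remaining points are kept unchanged."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    tree_b = cKDTree(pts_b)
    dist, idx = tree_b.query(pts_a)          # nearest B point for every A point
    overlapping = dist < overlap_radius

    midpoints = 0.5 * (pts_a[overlapping] + pts_b[idx[overlapping]])
    keep_b = np.ones(len(pts_b), bool)
    keep_b[idx[overlapping]] = False         # drop B points that were fused
    return np.vstack([midpoints, pts_a[~overlapping], pts_b[keep_b]])

a = np.array([[0, 0, 1.0], [1, 0, 1.0], [2, 0, 1.0]])
b = np.array([[1.9, 0.1, 1.0], [3, 0, 1.0], [4, 0, 1.0]])
print(fuse_overlapping_centerlines(a, b))    # fused coordinates of this stripe
```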
In another example, deduplication may also be performed with the crossing center point of each overlapping region as the boundary. For example, for AA1 and B1B in FIG. 11a, the crossing point O may be used as the boundary, and the points of the overlapping region B1O and of the overlapping region OA1 are removed respectively, obtaining the deduplicated AB shown in FIG. 11b. The coordinates of the points on the deduplicated AB are the fused coordinates.

The deduplication operation can alleviate the problem of inaccurate determination of the common coordinate axis when the finger is placed off-center, thereby improving the quality of image processing.

As described above, after the deduplication operation is performed on the points located on the centerline of the same reconstruction component unit in the multiple three-dimensional models, a deduplicated combined centerline (the result of combining the three centerlines) is obtained, and the coordinates of the points on this combined centerline are the fused coordinates corresponding to that reconstruction component unit. The three-dimensional coordinates of points at preset positions on the deduplicated combined centerline can be extracted, and the corresponding common coordinate axis is determined by principal component analysis. The points at preset positions may include, for example, the left and right endpoints of the deduplicated combined centerline. Exemplarily, the three-dimensional coordinates of the two endpoints of every deduplicated combined centerline may be extracted at one time and input into a principal component analysis algorithm model to obtain the common coordinate axis. Exemplarily, the points at preset positions may also include the left and right endpoints of the deduplicated combined centerline together with its midpoint, and the three-dimensional coordinates of these three points of all combined centerlines are input into the principal component analysis algorithm model together. Of course, other suitable points on the deduplicated combined centerlines may also be selected for analysis to obtain the common coordinate axis.
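For illustration, the snippet below sketches one way the principal component analysis over the endpoint coordinates might be carried out with a plain SVD; taking the first principal direction as the common axis direction is an assumption of this sketch, and the centerline data are synthetic.

```python
import numpy as np

def common_axis_by_pca(fused_centerlines):
    """Estimate a common coordinate axis from fused stripe centerlines.

    `fused_centerlines` is a list of (n_i, 3) arrays, one per stripe.  The two
    endpoints of every centerline are fed to a PCA; the mean point and the
    first principal direction are returned as a point on the axis and the
    axis direction."""
    samples = []
    for line in fused_centerlines:
        line = np.asarray(line, float)
        samples.append(line[0])    # left endpoint
        samples.append(line[-1])   # right endpoint
    samples = np.asarray(samples)
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[0]             # point on the axis, axis direction

lines = [np.array([[0, 0, 0.0], [5, 0.2, 0.1]]),
         np.array([[0, 1, 0.0], [5, 1.1, 0.2]]),
         np.array([[0, 2, 0.1], [5, 2.2, 0.1]])]
origin, direction = common_axis_by_pca(lines)
print(origin, direction)
```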
After the common coordinate axis is obtained, each three-dimensional model can be expanded along the common coordinate axis. First, a three-dimensional rectangular coordinate system can be re-established with the common coordinate axis as the reference, and the new coordinates of each point are denoted (u', v', w'). Reference is now made to FIG. 12, which is a schematic diagram of a method for expanding a three-dimensional fingerprint model. It can be understood that, ideally, the w'-axis coordinates of the points on the centerline of each stripe in the new three-dimensional coordinate system are equal. Based on this principle, each stripe can be expanded in turn along the common coordinate axis.

There are many ways to perform the expansion; only one specific expansion method is described here as an example.

Exemplarily, a plane l can be constructed through the v' axis and the w' axis, and multiple intersection points of the plane l with the reconstruction component units can be obtained. For example, in FIG. 12 the centerline of the stripe i=0 intersects the plane l at the point O1 (u'O1, v'O1, w'O1), and it is easy to understand that u'O1=0. Exemplarily, a plane l' perpendicular to the v' axis may also be constructed through the point O1 and taken as the reference plane onto which the points of the first model are expanded, i.e., all points of the first model fall on this reference plane after expansion along the common coordinate axis. Since the v'-axis coordinates of all points on this reference plane are equal, the coordinates of each point after expansion can be expressed as (p, q).
Exemplarily, for the point O1 in FIG. 12, its position after expansion is still the point O1, and its coordinates can be expressed as (pO1, qO1). It can be understood that pO1=u'O1 and qO1=w'O1. On the centerline of the stripe i=0, a point A1 is taken to the right of the point O1, and the position A1'(p1, q1) of the point corresponding to A1 in the expansion reference plane is found. It is easy to understand that q1=qO1, and that when A1 is infinitely close to O1, the length of the arc O1A1 is approximately equal to the straight-line distance p1 of O1A1. Therefore, the following calculation formula (4) can be obtained.

In formula (4), α denotes a pixel coefficient, with α>1. The pixel coefficient can be set according to the pixel density of the expanded image.

By analogy, the position of each point on the centerline of the stripe i=0 in the expansion reference plane can be calculated in turn, and the position coordinates (p, q) of all points of the first model in the expansion reference plane are thereby obtained.
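As one possible, purely illustrative reading of the expansion step above, the sketch below takes p as the pixel coefficient α times the accumulated arc length along the centerline measured from the intersection with the plane l, and keeps q at the anchor point's w' value. Both the geometric interpretation and the numeric values are assumptions of this sketch, not the exact formula (4) of the disclosure.

```python
import numpy as np

def unroll_centerline(points_prime, alpha=2.0):
    """Unroll one stripe centerline given in the common-axis frame (u', v', w').

    The point whose u' coordinate is closest to 0 is used as the anchor O1;
    q is kept at the anchor's w' value and p grows as alpha times the
    accumulated (signed) arc length along the centerline."""
    pts = np.asarray(points_prime, float)
    anchor = int(np.argmin(np.abs(pts[:, 0])))            # point with u' closest to 0
    q = pts[anchor, 2]                                    # q = w' of the anchor point

    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)    # segment lengths
    arc = np.concatenate([[0.0], np.cumsum(seg)])         # arc length from pts[0]
    p = alpha * (arc - arc[anchor])                       # 0 at the anchor, signed elsewhere
    return np.stack([p, np.full_like(p, q)], axis=-1)     # (p, q) per centerline point

# Toy centerline: a short arc of radius 5 around the common axis.
theta = np.linspace(-0.6, 0.6, 13)
centerline = np.stack([5 * np.sin(theta), np.zeros_like(theta), 5 * np.cos(theta)], axis=-1)
print(unroll_centerline(centerline)[:3])
```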
Moreover, from the three-dimensional coordinates (u, v, w) of each point of the first model and the corresponding expanded position coordinates (p, q), the correspondence between the two can be obtained. This correspondence is the second-order transformation relationship, which can be expressed by the following formula (5):

(p, q) = g(u, v, w)     Formula (5)

Exemplarily, the first transformation relationship includes the first-order transformation relationship and the second-order transformation relationship. Further, according to formula (3) and formula (5), the correspondence between each point of the first structured light image and its expanded position point can be obtained. This correspondence is the first transformation relationship, which can be expressed by the following formula (6):

(p, q) = F(x, y)     Formula (6)

Based on the same principle, the second first-order transformation relationship, the second second-order transformation relationship and the second transformation relationship corresponding to the fourth target image can also be obtained. Those of ordinary skill in the art can readily understand this implementation, which will not be repeated here.
Exemplarily, in step S430, after the first transformation relationship is obtained, the pixels of the first illumination light image can be transformed directly according to the first transformation relationship to obtain the first data to be stitched. It can be understood that the first illumination light image may contain a number of pixels, for example 3072*2048 pixels. The position coordinates of each pixel can be substituted into the above formula (6) to obtain the expanded position coordinates of that pixel, and the pixel values are accordingly transferred one by one to their expanded positions, thereby obtaining the expanded image corresponding to the first illumination light image and thus the first data to be stitched. The second data to be stitched is obtained in a manner similar to the first data to be stitched, which will not be repeated here.
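For illustration, the snippet below sketches the pixel-wise application of a mapping such as formula (6): every pixel value of the illumination light image is scattered to its rounded expanded position. The helper name, the nearest-pixel scatter (with no hole filling) and the toy mapping are assumptions of this sketch; a production pipeline would typically also interpolate the expanded image.

```python
import numpy as np

def expand_illumination_image(illum, F, out_shape):
    """Scatter every pixel of the illumination light image to its expanded
    position (p, q) = F(x, y) and return the expanded image.

    `F` maps an (N, 2) array of (x, y) pixel coordinates to an (N, 2) array of
    (p, q) coordinates; targets outside the output image are ignored."""
    h, w = illum.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xy = np.stack([xs.ravel(), ys.ravel()], axis=-1).astype(float)
    pq = np.rint(F(xy)).astype(int)

    out = np.zeros(out_shape, dtype=illum.dtype)
    inside = (pq[:, 0] >= 0) & (pq[:, 0] < out_shape[1]) & \
             (pq[:, 1] >= 0) & (pq[:, 1] < out_shape[0])
    out[pq[inside, 1], pq[inside, 0]] = illum.ravel()[inside]
    return out

# Toy usage: a mapping that simply stretches x by a factor of 1.5.
demo = (np.arange(12, dtype=np.uint8).reshape(3, 4) * 20)
print(expand_illumination_image(demo, lambda xy: xy * np.array([1.5, 1.0]), (3, 6)))
```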
The above technical solution can convert a two-dimensional illumination light image directly into a two-dimensional expanded image according to the calculated transformation relationship. The method is simple and intuitive, does not occupy excessive computing resources, quickly obtains high-precision expanded image data, and significantly improves the efficiency of image processing.

In another example, the first transformation relationship and the second transformation relationship are expansion transformation relationships, the first data to be stitched is a first expanded image, and the second data to be stitched is a second expanded image. Determining the first transformation relationship based on the reconstruction component units contained in the first reconstructed image pattern includes: performing three-dimensional reconstruction according to the positions of the reconstruction component units contained in the first reconstructed image pattern to obtain the first model, and obtaining the first first-order transformation relationship according to the positions of the reconstruction component units contained in the first reconstructed image pattern and the first model; expanding the first model along the common coordinate axis to obtain the expanded positions of the points on the first model; and obtaining the first second-order transformation relationship according to the first model and the expanded positions of the points on the first model, wherein the first transformation relationship includes the first first-order transformation relationship and the first second-order transformation relationship. In addition, determining the second transformation relationship based on the reconstruction component units contained in the second reconstructed image pattern includes: performing three-dimensional reconstruction according to the positions of the reconstruction component units contained in the second reconstructed image pattern to obtain the second model, and obtaining the second first-order transformation relationship according to the positions of the reconstruction component units contained in the second reconstructed image pattern and the second model; expanding the second model along the common coordinate axis to obtain the expanded positions of the points on the second model; and obtaining the second second-order transformation relationship according to the second model and the expanded positions of the points on the second model, wherein the second transformation relationship includes the second first-order transformation relationship and the second second-order transformation relationship.

In this case, transforming the first illumination light image according to the first transformation relationship to obtain the first data to be stitched may include: performing a first-order transformation on the first illumination light image according to the first first-order transformation relationship to obtain a three-dimensional model corresponding to the first illumination light image; and performing a second-order transformation on the three-dimensional model corresponding to the first illumination light image according to the first second-order transformation relationship to obtain the first expanded image corresponding to the first illumination light image. Transforming the second illumination light image according to the second transformation relationship to obtain the second data to be stitched may include: performing a first-order transformation on the second illumination light image according to the second first-order transformation relationship to obtain a three-dimensional model corresponding to the second illumination light image; and performing a second-order transformation on the three-dimensional model corresponding to the second illumination light image according to the second second-order transformation relationship to obtain the second expanded image corresponding to the second illumination light image.

The above embodiment in which the first transformation relationship is calculated from the first first-order transformation relationship and the first second-order transformation relationship is merely an example rather than a limitation of the present application. In another embodiment, the first transformation relationship may include the first first-order transformation relationship and the first second-order transformation relationship. In this case, the first first-order transformation relationship and the first second-order transformation relationship can likewise be determined by obtaining the first model through the above three-dimensional reconstruction and expanding the first model. However, it is then not necessary to compute formula (6) and transform the pixels of the first illumination light image directly into the first expanded image based on formula (6); instead, the pixels of the first illumination light image may first be transformed into three-dimensional space based on the first first-order transformation relationship, i.e., formula (3), and then transformed from three-dimensional space onto the two-dimensional expanded image based on the first second-order transformation relationship, i.e., formula (5). That is, the pixels of the first illumination light image can be transformed onto the two-dimensional expanded image through two indirect transformations to obtain the first expanded image. The second expanded image is obtained in a manner similar to the first expanded image, which will not be repeated herein.

The intermediate stage of this solution can also show the user the three-dimensional reconstruction effect corresponding to the illumination light image (for example, a fingerprint image), which facilitates checking and correction by the user and also provides diversified visual images to meet various user needs.
Exemplarily, in step S430, the transformation corresponding to the first transformation relationship and the second transformation relationship is three-dimensional reconstruction, the first data to be stitched is a first target model, and the second data to be stitched is a second target model. Determining the first transformation relationship based on the reconstruction component units contained in the first reconstructed image pattern includes: performing three-dimensional reconstruction according to the positions of the reconstruction component units contained in the first reconstructed image pattern to obtain the first model; and obtaining the first transformation relationship according to the positions of the reconstruction component units contained in the first reconstructed image pattern and the first model. Transforming the first illumination light image according to the first transformation relationship to obtain the first data to be stitched includes: performing a first-order transformation on the first illumination light image according to the first transformation relationship to obtain the first target model. Determining the second transformation relationship based on the reconstruction component units contained in the second reconstructed image pattern includes: performing three-dimensional reconstruction according to the positions of the reconstruction component units contained in the second reconstructed image pattern to obtain the second model; and obtaining the second transformation relationship according to the positions of the reconstruction component units contained in the second reconstructed image pattern and the second model. Transforming the second illumination light image according to the second transformation relationship to obtain the second data to be stitched includes: performing a first-order transformation on the second illumination light image according to the second transformation relationship to obtain the second target model.

Different from the example described above in which the illumination light images are transformed into expanded images, in this example the expansion processing of the three-dimensional models (i.e., the first model and the second model) may be omitted, and the pixel data contained in the illumination light images is instead transformed directly into three-dimensional space to obtain the first target model and the second target model. The first target model and the second target model can then be stitched directly. For example, after the above first model and formula (3) are obtained, the pixel value of each pixel of the first illumination light image is mapped, directly according to formula (3), into the three-dimensional space where the first model is located, obtaining the first target model containing the pixel information of the first illumination light image. The second illumination light image can be transformed similarly to obtain the second target model.

Exemplarily, before step S440, the image stitching method 400 may further include: detecting the first structured light image to obtain the positions of the stitching component units in the first structured light image; performing the operation corresponding to the first transformation relationship on the positions of the stitching component units in the first structured light image to obtain the first mapped positions; detecting the second structured light image to obtain the positions of the stitching component units in the second structured light image; and performing the operation corresponding to the second transformation relationship on the positions of the stitching component units in the second structured light image to obtain the second mapped positions.

Exemplarily, the following description takes FIG. 7a as the first structured light image and FIG. 7b as the second structured light image. A Unet model may be used to detect and identify each circular speckle and each square speckle in FIG. 7a and FIG. 7b respectively, and to obtain the position coordinates of each circular speckle and square speckle. The position coordinates of each speckle can then be substituted into the aforementioned formula (3) or formula (6) to obtain the position coordinates of each speckle in the three-dimensional model or on the image plane where the expanded image is located.
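As an illustrative sketch of the post-processing that might follow such a segmentation network, the snippet below extracts speckle centroids from a binary mask (for example, a thresholded network output) using connected-component labelling. The network itself is not shown, and the synthetic mask is an assumption of this sketch.

```python
import numpy as np
from scipy import ndimage

def speckle_centroids(mask):
    """Return the (x, y) centroid of every connected speckle region in a
    binary segmentation mask."""
    labels, count = ndimage.label(mask)
    centers = ndimage.center_of_mass(mask, labels, list(range(1, count + 1)))
    # center_of_mass returns (row, col); convert to (x, y).
    return np.array([(c, r) for r, c in centers])

# Synthetic mask containing two speckles.
mask = np.zeros((20, 20), dtype=np.uint8)
mask[3:6, 4:7] = 1
mask[12:15, 10:14] = 1
print(speckle_centroids(mask))
```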
In this embodiment, the positions of the stitching component units are first detected from the structured light image, and only this position information is then subjected to the transformation operation to obtain the positions of the corresponding mapped stitching component units. This solution only needs to perform position computations and does not need to map the pixel values of the structured light image onto the image plane where the expanded image is located, so the amount of computation is relatively small.

Exemplarily, the image stitching method 400 further includes: performing the operation corresponding to the first transformation relationship on the first structured light image to obtain a first mapped structured light image; performing mapped stitching component unit detection on the first mapped structured light image to obtain the first mapped positions; performing the operation corresponding to the second transformation relationship on the second structured light image to obtain a second mapped structured light image; and performing mapped stitching component unit detection on the second mapped structured light image to obtain the second mapped positions.

Exemplarily, since the structured light image includes both reconstruction structured light pattern information and stitching structured light information, the first structured light image and the second structured light image may also be converted according to the obtained first transformation relationship and second transformation relationship, so that the stitching structured light image information is mapped together onto new structured light images, and the position of each mapped stitching component unit is then detected on the mapped structured light images. Referring again to FIGS. 7a-7c, the coordinates of each pixel of the three structured light images in FIGS. 7a-7c can be substituted into formula (3) or formula (6) to obtain three expanded images containing the stitching structured light pattern, or three three-dimensional models containing the stitching structured light pattern, and the pixel values of the pixels of the structured light images are also mapped onto the corresponding expanded images (mapped structured light images) or three-dimensional models. Speckle detection is then performed on the expanded images or three-dimensional models to obtain the position coordinates of each speckle, so that the mapped positions of the mapped stitching component units are obtained.

Exemplarily, the transformation corresponding to the first transformation relationship and the second transformation relationship is three-dimensional reconstruction, the first data to be stitched is the first target model, and the second data to be stitched is the second target model. The image stitching method 400 may further include step S460: expanding the overall data along the common coordinate axis to obtain an overall expanded image.

Exemplarily, for the case where the expansion processing of the three-dimensional models is not performed in step S430, the first target model and the second target model can be stitched through steps S440 and S450 to obtain three-dimensional overall data containing the pixel information of the illumination light images, and the three-dimensional overall data is then expanded. The principle of the expansion processing in step S460 is similar to the above manner of expanding a three-dimensional model onto a two-dimensional plane: each pixel of the overall data can be expanded at one time according to the second-order transformation relationship to obtain the overall expanded image.
Exemplarily, step S440 includes at least one of the following: a first search operation, a second search operation and a check operation, wherein:

the first search operation includes: for any specific first mapped stitching component unit, finding, among the second mapped stitching component units, the second mapped stitching component unit closest to the specific first mapped stitching component unit as the second mapped stitching component unit matching the specific first mapped stitching component unit;

the second search operation includes: among the first mapped stitching component units, taking the first current mapped stitching component unit as a first starting point and searching for a target first mapped stitching component unit that satisfies a relative position condition with respect to the first starting point; among the second mapped stitching component units, taking the second current mapped stitching component unit as a second starting point and searching for a target second mapped stitching component unit that satisfies the relative position condition with respect to the second starting point; and taking the target second mapped stitching component unit as the second mapped stitching component unit matching the target first mapped stitching component unit, wherein the first current mapped stitching component unit matches the second current mapped stitching component unit;

the check operation includes: calculating the distance between the mapped positions of mutually matching first and second mapped stitching component units to obtain a matching error; and determining the mutually matching first and second mapped stitching component units whose matching error satisfies the requirement as the finally matched first and second mapped stitching component units, the multiple pairs of mapped stitching component units included in the matching relationship being the finally matched pairs of mapped stitching component units.
Exemplarily, based on the mapped position of each mapped stitching component unit, for example its coordinate value, mapped stitching component units that satisfy a preset distance requirement can be found and matched between the mapped stitching component units corresponding to the two target images. The preset distance requirement may be, for example, the shortest distance. The mapped stitching component units may be, for example, the bright/dark circular speckles or square speckles located on each stripe as shown in FIG. 6. Merely by way of example, the structured light pattern shown in FIG. 6 can be projected onto the finger to be captured to obtain two target images; the two target images are input into the Unet model, the position coordinates of each speckle are identified, and these coordinates are substituted into formula (3) or formula (6) of the above example, so that each speckle is mapped onto the corresponding expanded image or three-dimensional model.

Exemplarily, the first search operation may include the following operations. Among the first mapped stitching component units determined by position mapping based on the stitching component units in the first structured light image, any specific first mapped stitching component unit is selected, for example the first mapped speckle corresponding to the bright square speckle 610 in FIG. 6. Then, among the second mapped stitching component units determined by position mapping based on the stitching component units in the second structured light image, the second mapped stitching component unit closest to the specific first mapped stitching component unit is found, for example the bright square speckle closest to the first mapped speckle (the second mapped speckle), and its coordinates are recorded. Subsequently, it can be determined whether the above two mapped stitching component units (the first mapped speckle and the second mapped speckle) match. By way of example and not limitation, whether two mapped stitching component units match can be determined by judging whether the distance between them satisfies a first preset distance requirement, for example whether it is smaller than a first predetermined distance threshold. The first predetermined distance threshold may be any suitable value. A first mapped stitching component unit and a second mapped stitching component unit satisfying the first preset distance requirement can be regarded as matching. Exemplarily, multiple pairs of matching mapped stitching component units can be found by analogy.
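For illustration only, the snippet below sketches the first search operation: for every first mapped position the nearest second mapped position is taken, and the pair is accepted only if it satisfies the distance requirement. The threshold value and the toy coordinates are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def first_search(first_positions, second_positions, max_dist=1.0):
    """For every first mapped stitching component unit, take the nearest
    second one and accept the pair only if the distance satisfies the first
    preset distance requirement (here: smaller than max_dist)."""
    first = np.asarray(first_positions, float)
    second = np.asarray(second_positions, float)
    dist, idx = cKDTree(second).query(first)
    return [(i, int(j)) for i, (d, j) in enumerate(zip(dist, idx)) if d < max_dist]

first = [(2.0, 9.0), (3.0, 4.0), (8.0, 1.0)]
second = [(2.1, 9.0), (3.2, 4.0), (15.0, 15.0)]
print(first_search(first, second))   # -> [(0, 0), (1, 1)]; the third unit stays unmatched
```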
In addition, the second search operation may also be performed. The first search operation and the second search operation may coexist. In the second search operation, the first starting point may be the first mapped stitching component unit in any currently matched pair of mapped stitching component units (i.e., the first current mapped stitching component unit). For this first starting point, a target first mapped stitching component unit satisfying the relative position condition with respect to the first starting point can be searched for. The relative position condition may include that the distance from the first starting point satisfies a second preset distance requirement, or being closest to the first starting point, or being closest to the first starting point along a predetermined direction, and so on. The second preset distance requirement may be the same as or different from the above first preset distance requirement. For example, the second preset distance requirement may be being smaller than or equal to a second predetermined distance threshold, and the second predetermined distance threshold may be any suitable value. For example, the speckle located at the center of any stripe (it can be understood that here this refers to the first mapped speckle located at the center) may be taken as the initial first starting point. The second mapped speckle matching the current first starting point is found among the second mapped speckles (the matching mapped speckle may optionally be found using the first search operation). Subsequently, starting from the first mapped speckle located at the center, the next first mapped speckle can be found, which may be the first mapped speckle closest to the central first mapped speckle along a predetermined direction; speckle matching is then performed based on the found first mapped speckle, and the second mapped speckle matching it is found. The newly matched first mapped speckle is then taken as a new first starting point, the next first mapped speckle is found, and matching continues. These operations can be performed in a loop.

In addition, a third search operation may also be performed. According to the first mapped positions of the first mapped stitching component units and the second mapped positions of the second mapped stitching component units, global matching is performed between the first mapped stitching component units and the second mapped stitching component units so that the overall matching error is minimized, where the overall matching error is the sum of the differences between the mapped positions of the first and second mapped stitching component units that have a matching relationship. For example, suppose there are five first mapped stitching component units and four second mapped stitching component units. When the first mapped stitching component units Nos. 1-3 are matched with the second mapped stitching component units Nos. 1-3 respectively, the square of the distance between the mapped positions of the No. 1 first and No. 1 second mapped stitching component units, the square of the distance between the mapped positions of the No. 2 first and No. 2 second mapped stitching component units, and the square of the distance between the mapped positions of the No. 3 first and No. 3 second mapped stitching component units are added up to obtain the overall matching error. When the first mapped stitching component units Nos. 1-3 are matched with the second mapped stitching component units Nos. 2-4 respectively, the square of the distance between the mapped positions of the No. 1 first and No. 2 second mapped stitching component units, the square of the distance between the mapped positions of the No. 2 first and No. 3 second mapped stitching component units, and the square of the distance between the mapped positions of the No. 3 first and No. 4 second mapped stitching component units are added up to obtain the overall matching error. The matching relationship that minimizes the overall matching error is the determined matching relationship between the first mapped stitching component units and the second mapped stitching component units.
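As an illustrative sketch of the global matching described above, the snippet below builds a cost matrix of squared distances between the mapped positions and uses SciPy's assignment solver as one possible way to search for the one-to-one matching with the smallest overall matching error; the data are the 5-versus-4 toy example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def global_match(first_positions, second_positions):
    """Find the one-to-one matching between first and second mapped stitching
    component units that minimizes the sum of squared distances between the
    matched mapped positions (the overall matching error)."""
    first = np.asarray(first_positions, float)
    second = np.asarray(second_positions, float)
    # Cost matrix of squared distances between every first/second pair.
    cost = ((first[:, None, :] - second[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist())), float(cost[rows, cols].sum())

# 5 first units vs. 4 second units, as in the example above.
first = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
second = [(1.1, 0), (2.1, 0), (3.1, 0), (4.1, 0)]
matches, overall_error = global_match(first, second)
print(matches, overall_error)
```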
Further, the marked matching speckle pairs can also be checked, for example by calculating the relative distance value of each matching speckle pair. By way of example and not limitation, the mean M and the variance μ of the multiple relative distance values can be calculated, a threshold range for the relative distance, for example M±μ, can be set, and speckle pairs falling outside this threshold range are removed. Through the above check operation, the matching error can be checked so as to obtain a more accurate matching result. Exemplarily, step S450 may include the following steps S451, S452 and S453. Step S451: based on the matching relationship, determine the position correspondence between the first data to be stitched and the second data to be stitched. Step S452: based on the position correspondence, perform a stitching transformation on one of the first data to be stitched and the second data to be stitched to obtain transformed data to be stitched. Step S453: stitch the untransformed data to be stitched (the one of the first and second data to be stitched that was not transformed) with the transformed data to be stitched to obtain the overall data.
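The following minimal sketch illustrates the check operation: the distance of every matched pair is computed, and pairs outside the M±μ band are removed. Using the standard deviation as the spread measure μ is one possible reading of the text and an assumption of this sketch, as are the toy coordinates.

```python
import numpy as np

def check_matches(first_positions, second_positions, pairs):
    """Keep only the matched pairs whose relative distance lies inside
    M +/- mu, where M is the mean pair distance and mu is taken here as the
    standard deviation of the pair distances."""
    first = np.asarray(first_positions, float)
    second = np.asarray(second_positions, float)
    d = np.array([np.linalg.norm(first[i] - second[j]) for i, j in pairs])
    m, mu = d.mean(), d.std()
    return [pair for pair, di in zip(pairs, d) if m - mu <= di <= m + mu]

first = [(2.0, 9.0), (3.0, 4.0), (5.0, 5.0), (7.0, 2.0)]
second = [(2.1, 9.0), (3.2, 4.0), (5.1, 5.1), (9.5, 2.0)]
pairs = [(0, 0), (1, 1), (2, 2), (3, 3)]
print(check_matches(first, second, pairs))   # the outlier pair (3, 3) is removed
```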
The following description takes the case in which the first data to be spliced and the second data to be spliced are expanded images as a specific example.
Exemplarily, according to step S440, a plurality of speckle pairs and the position coordinates of each speckle pair on the first data to be spliced and on the second data to be spliced can be obtained, for example speckle pairs D1*D2 with D1(2,3) and D2(2.1,3); E1*E2 with E1(3,4) and E2(3.2,4); and F1*F2 with F1(3,5) and F2(3.2,5). Exemplarily, a transformation function between the first data to be spliced and the second data to be spliced can be constructed from the coordinate relationship of the above three speckle pairs, or of more speckle pairs (for example by TPS thin-plate spline interpolation). The splicing transformation mainly adjusts (for example scales) at least part of one of the first data to be spliced and the second data to be spliced so that, after transformation, it matches or substantially matches the size of the corresponding region in the other data to be spliced.
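The text names TPS thin-plate spline interpolation as one way to build the transformation function; the sketch below realizes it with SciPy's RBFInterpolator and a thin-plate-spline kernel, using the three illustrative speckle pairs above. The helper name and the choice of library are assumptions, not the patent's own implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def build_tps_transform(src_pts, dst_pts):
    """Build a 2-D thin-plate-spline mapping from speckle positions on one
    data set (src) to the corresponding positions on the other (dst)."""
    src = np.asarray(src_pts, dtype=float)  # e.g. D1, E1, F1
    dst = np.asarray(dst_pts, dtype=float)  # e.g. D2, E2, F2
    return RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Example with the three speckle pairs above (more pairs give a better fit):
tps = build_tps_transform([(2, 3), (3, 4), (3, 5)], [(2.1, 3), (3.2, 4), (3.2, 5)])
mapped = tps(np.array([[2.5, 3.5]]))  # where a point of the first data lands in the second
```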
Exemplarily, each pixel of the first data to be spliced or of the second data to be spliced can be transformed according to the above transformation function to obtain the transformed data to be spliced. In the following it is assumed that the first data to be spliced is transformed. It is easy to understand that the transformed data to be spliced and the second data to be spliced substantially coincide in size in the regions whose positions correspond. On this basis, splicing and fusing the two can quickly produce overall data with a small error. In another example, the second data to be spliced may be transformed first instead; the principle is the same as above, and the user may configure this as required.
The splicing example in which the first data to be spliced or the second data to be spliced is a three-dimensional target model is similar to the above method, as can be understood by those of ordinary skill in the art, and is not repeated here.
Exemplarily, the first transformation relationship and the second transformation relationship are expansion transformation relationships, the first data to be spliced is a first expanded image, and the second data to be spliced is a second expanded image; the untransformed data to be spliced is an untransformed expanded image, and the transformed data to be spliced is a transformed expanded image. Splicing the untransformed data to be spliced with the transformed data to be spliced to obtain the overall data includes: determining the pixel values of the overall data within the common image region based on the pixel values of the untransformed expanded image within the common image region and/or the pixel values of the transformed expanded image within the common image region; determining the pixel values of the overall data within the first non-common image region based on the pixel values of the first expanded image within that non-common image region; and determining the pixel values of the overall data within the second non-common image region based on the pixel values of the second expanded image within that non-common image region. The common image region is the region occupied by the mapping mosaic units of the first and second mapping mosaic units that match each other; the first non-common image region is the region occupied by first mapping mosaic units that are not matched with any second mapping mosaic unit; and the second non-common image region is the region occupied by second mapping mosaic units that are not matched with any first mapping mosaic unit.
Exemplarily, referring to FIG. 13, a schematic diagram of splicing two images is shown. As shown in FIG. 13, the first expanded image may first be transformed to obtain a transformed expanded image 1310. It can be understood that, since the first non-common image region 1330 comes entirely from the transformed first expanded image, the pixel values of this region may equal the corresponding pixel values of the first expanded image; the second non-common image region 1340 is a part of the second expanded image 1320 that contains no information from the first expanded image, so the pixel values of this region may equal the corresponding pixel values of the second expanded image. The common image region 1350 may be the overlapping region of the transformed expanded image 1310 and the second expanded image 1320, so the pixel values of this region may be taken from either of the two, or determined jointly from both, for example by averaging.
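As a hedged illustration of the region-by-region filling just described, the sketch below assumes the three region masks are already known and averages the two images in the common region (the text allows either image or a combination). The function name and mask representation are assumptions for illustration only.

```python
import numpy as np

def fuse_expanded_images(transformed, second, mask_common, mask_first_only, mask_second_only):
    """Fill the overall image region by region: the first non-common region from the
    transformed first expanded image, the second non-common region from the second
    expanded image, and the common (overlapping) region by averaging the two."""
    overall = np.zeros_like(second, dtype=np.float64)
    overall[mask_first_only] = transformed[mask_first_only]
    overall[mask_second_only] = second[mask_second_only]
    overall[mask_common] = 0.5 * (transformed[mask_common].astype(np.float64)
                                  + second[mask_common].astype(np.float64))
    return overall.astype(second.dtype)
```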
The above method for determining the pixel values of the overall data is simple to implement and computationally light; it can be realized by directly loading the corresponding logic operations into the image processing model.
Exemplarily, the overall data includes a common region, a first non-common region and a second non-common region. The common region is the region occupied by the mapping mosaic units of the first and second mapping mosaic units that match each other; the first non-common region is the region occupied by first mapping mosaic units that are not matched with any second mapping mosaic unit; and the second non-common region is the region occupied by second mapping mosaic units that are not matched with any first mapping mosaic unit. Step S460 includes: unfolding the anchor points in the overall data along the common coordinate axis to obtain the pixel coordinates of each anchor point in the overall expanded image; for an anchor point located in the common region, determining its pixel value in the overall expanded image based on the pixel value of the corresponding pixel in the first illumination light image and/or the pixel value of the corresponding pixel in the second illumination light image; for an anchor point located in the first non-common region, determining its pixel value in the overall expanded image based on the pixel value of the corresponding pixel in the first illumination light image; and for an anchor point located in the second non-common region, determining its pixel value in the overall expanded image based on the pixel value of the corresponding pixel in the second illumination light image.
An anchor point is any point on the overall data. It can be understood that, in this embodiment, the overall data is an overall target model obtained by splicing the first target model and the second target model. The overall target model is a three-dimensional model, which may consist of a point cloud, and any point on the point cloud can be regarded as an anchor point. By way of example and not limitation, the coordinates of each anchor point in three-dimensional space are known, while its pixel value may be unknown. In the example in which the first data to be spliced and the second data to be spliced are three-dimensional target models, in step S460 the overall data may also be unfolded along the common axis anchor point by anchor point, so that each anchor point corresponds one-to-one with a pixel in the illumination light image; the pixel value of each pixel in the illumination light image can thus be mapped into the overall expanded image. In addition, similarly to the method used in the foregoing expanded-image splicing scheme for determining the pixel values of the overall data, the common region and the non-common regions can be distinguished when mapping pixel values. For an anchor point in a non-common region, its pixel value can be determined directly from the corresponding pixel value of the illumination light image to which that non-common region belongs. For an anchor point in the common region, either illumination light image can be selected, or the corresponding pixel values of both illumination light images can be combined, to determine the pixel value of the anchor point and thereby obtain the pixel values of all anchor points in the overall expanded image. Those skilled in the art can understand this method, and it is not repeated here.
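A minimal sketch of the anchor-based coloring step is given below. It assumes the unfolding has already produced, for each anchor, its pixel coordinate in the overall expanded image, a region label, and the illumination-light pixel values it maps to; that record layout, the averaging choice for the common region, and the function name are all illustrative assumptions rather than the patent's own procedure.

```python
import numpy as np

def render_overall_expanded_image(anchors, height, width):
    """anchors: iterable of dicts with keys
         'uv'     - (row, col) coordinate of the anchor in the overall expanded image
         'region' - 'common', 'first_only' or 'second_only'
         'px1'    - pixel value the anchor maps to in the first illumination light image (or None)
         'px2'    - pixel value the anchor maps to in the second illumination light image (or None)
    """
    out = np.zeros((height, width), dtype=np.float64)
    for a in anchors:
        r, c = a["uv"]
        if a["region"] == "first_only":
            out[r, c] = a["px1"]          # first non-common region: first illumination image
        elif a["region"] == "second_only":
            out[r, c] = a["px2"]          # second non-common region: second illumination image
        else:                              # common region: here the two candidates are averaged
            out[r, c] = 0.5 * (float(a["px1"]) + float(a["px2"]))
    return out.astype(np.uint8)
```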
An embodiment of the present application provides a handprint collection system. When the image acquisition device collects a handprint image, the system can use a blue light source and a green light source together to supplement light for (i.e., illuminate) the image acquisition device. Blue-light supplementation and green-light supplementation each have their own advantages and disadvantages for handprint collection under different skin conditions; using both light sources together expands the applicability of the handprint collection system to different skin conditions and thus helps meet the collection needs of a wide range of people.
According to one aspect of the present application, a handprint collection system is provided. Reference may be made back to FIG. 2 to understand the handprint collection system in this embodiment.
As shown in FIG. 2, the handprint collection system 200 may include one or more image acquisition devices, a lighting system and a processing device 240. The lighting system includes one or more blue light sources and one or more green light sources, each of which emits light toward the handprint collection area 300. The blue light sources of the lighting system form a blue light source group, and the green light sources form a green light source group. The handprint collection area 300 is used for placing part or all of the user's hand, such as a finger 400. The one or more image acquisition devices collect images of the handprint collection area 300 while the lighting system illuminates it, so as to obtain handprint images. The processing device 240 processes the handprint images collected by the one or more image acquisition devices. Note that the light sources of the lighting system described herein may be any type of light source not used for encoding, including but not limited to one or more of the following: point light sources, line light sources, surface light sources, etc. A point light source may, for example, be a light strip composed of multiple point light sources. Exemplarily, at least some of the light sources of the lighting system and the one or more image acquisition devices are located on the same side of the handprint collection area, for example below it. A light source located on the same side of the handprint collection area as the one or more image acquisition devices may be called a main light source. By way of example and not limitation, in addition to the main light sources, the lighting system may further include auxiliary light sources located near the handprint collection area, so as to illuminate the handprint from a closer position. Compared with the main light sources, the auxiliary light sources are closer to the center of the handprint collection area. For example, the main light sources may be arranged below the handprint collection area, while the center of gravity of each auxiliary light source may lie on, or approximately on, the same horizontal plane as the center of the handprint collection area.
At least some of the auxiliary light sources may be located at predetermined positions on the two sides of the handprint collection area (a first predetermined position on the first side and a second predetermined position on the second side), and/or at a predetermined position near the fingertip position (which may be called a third predetermined position). A light source at a predetermined position on either side of the handprint collection area may emit light toward that side of the area, and a light source near the fingertip position may emit light toward the fingertip position. The fingertip position is the position where the user's fingertip is expected to fall when the user places a finger or palm in the handprint collection area. It can be understood that the auxiliary light sources are positioned so as to illuminate the hand without affecting the imaging of the handprint collection system (for example, an auxiliary light source does not enter the imaging range of an image acquisition device). The auxiliary light sources may also include one or more blue light sources and one or more green light sources.
Exemplarily, the handprint collection system of the embodiments of the present application may be contactless, that is, the handprint collection area is a space: the user simply places part or all of the hand in the handprint collection area for collection, and no physical object is in contact with the handprint. Of course, optionally, the handprint collection system may also be contact-based, for example a handprint collection system running on a smart device that collects handprints through contact between a finger and a screen.
FIG. 2 shows three image acquisition devices 212, 214 and 216. In addition, FIG. 2 also shows the lighting system, which includes at least one blue light source and at least one green light source.
By way of example and not limitation, the lighting system shown in FIG. 2 may include three light source sets in one-to-one correspondence with the three image acquisition devices. Each light source set includes at least one blue light source and at least one green light source, and provides illumination when its corresponding image acquisition device captures an image.
Exemplarily, according to the relative positions of the light sources in a light source set with respect to the corresponding image acquisition device, the light source set may be further divided into light source subsets. For example, if the light sources of a set are distributed on both sides of the image acquisition device, the left-hand light sources form one subset and the right-hand light sources form another. In FIG. 2, the first light source set includes light source subsets 222 and 222', the second includes subsets 224 and 224', and the third includes subsets 226 and 226'.
Note that the structure of the handprint collection system 200 shown in FIG. 2 is only an example and not a limitation of the present application; the handprint collection system 200 according to the embodiments of the present application is not limited to what is shown in FIG. 2. For example, the number of image acquisition devices may be more or fewer than three. In addition, the number and positions of the light source subsets in the light source set around each image acquisition device shown in FIG. 2 may also be changed as required. For example, any light source set may include a single light source subset (for example, the first light source set may include only subset 222) or more than two subsets.
Exemplarily, the lighting system includes at least one light source pair, each light source pair including one blue light source and one green light source. This is a preferable light source arrangement: it keeps each paired blue and green light source as close together as possible, so that the acquisition conditions of the blue and green channels of the image are as consistent as possible, which helps improve the accuracy of the handprint image subsequently obtained from the two channels.
Exemplarily, in embodiments in which the lighting system includes one or more light source sets in one-to-one correspondence with the one or more image acquisition devices, each light source set may include at least one light source pair, each pair including one blue light source and one green light source. In this way, when each image acquisition device captures an image, the imaging conditions of its blue channel and green channel are kept as consistent as possible. For example, any light source subset in FIG. 2 may include one light source pair.
The lighting system includes at least one blue light source and at least one green light source; beyond that, the number of light sources in the lighting system may be set to any suitable number as required. Classified by color, the light sources of the lighting system can be divided into a blue light source group and a green light source group. The number of blue light sources in the blue light source group and the number of green light sources in the green light source group may each be arbitrary, and the two numbers may or may not be equal. Preferably, the number of blue light sources equals the number of green light sources.
Exemplarily, the handprint collection system 200 may further include a housing, and the one or more image acquisition devices and at least some of the light sources of the lighting system may be arranged inside the housing. Exemplarily, a window may be provided on the housing, and a light-transmitting plate may be provided on the window. The handprint collection area may be located above the light-transmitting plate: when the user places a finger in the handprint collection area, the lighting system below can shine light onto the finger through the plate, and the image acquisition devices below the handprint collection area can likewise capture the fingerprint image through the plate. Of course, the embodiment in which the handprint collection area is located above the light-transmitting plate is only an example; the plate is optional, and the position of the handprint collection area relative to the image acquisition devices and at least some of the light sources may be changed, as long as the handprint collection area lies within the imaging range of the image acquisition devices and within the illumination range of the lighting system. In this description, the imaging range of any image acquisition device refers to the range covered by a cone whose apex is the optical center of the device's lens and whose cone angle is the device's field of view. It should be noted that the light-transmitting plate can be installed at a set height inside the housing, so as to prevent, as far as possible, the virtual image of the illumination light sources formed by the plate from entering the imaging range of the image acquisition devices. It can be understood that the lighting system may also include light sources located near the handprint collection area, that is, auxiliary light sources located above the light-transmitting plate, so as to illuminate the hand from a closer position. The auxiliary light sources may likewise include both blue and green light sources. They may include one or more of point, line and surface light sources, and may be located near the hand in the handprint collection area; for example, the auxiliary light sources may include light strips composed of point light sources on both sides of the handprint collection area and point light sources near the fingertip position.
After the user places a finger or palm in the handprint collection area, the image acquisition devices can capture the user's fingerprint or palmprint image. The handprint collection system may be used to collect only one type of hand texture, or several types. For example, it may be used to collect only fingerprints of the fingertip region (including prints on the front and sides of the finger pad); it may also collect fingerprints of the fingertip region together with the palmprint of the palm; or it may collect fingerprints of the fingertip region, the ridge patterns on the other phalanges of the fingers, and the palmprint of the palm at the same time. Several types of hand texture may share the same handprint collection area or use different ones, and the collection areas for different types of hand texture may overlap partly or completely.
The number of image acquisition devices included in the handprint collection system 200 may be set to any suitable number as required, for example 1, 2, 3 or 6.
In one embodiment, the handprint collection system 200 may include a single image acquisition device. In this case, optionally, handprint images may be collected at a single angle by the image acquisition device. Of course, optionally, handprint images at multiple angles may also be collected by rotating and/or moving the image acquisition device. When the collected handprint is a fingerprint, the single-angle scheme is well suited to simulating a fingerprint image obtained by flat impression.
In another embodiment, the handprint collection system 200 may include multiple image acquisition devices. In this case, optionally, handprint images at multiple angles may be collected by the multiple devices. That is, the optical axes of any two of the image acquisition devices may form a preset, non-zero angle with each other, so that handprint images at different angles can be collected. When the collected handprint is a fingerprint, the angles/heights of the multiple devices can be adjusted so that not only the frontal fingerprint image of the finger pad but also the side fingerprint images can be captured; the multi-angle scheme is well suited to simulating a fingerprint image obtained by rolled impression. It can be understood that the multiple image acquisition devices may be arranged such that their optical axes are approximately perpendicular to the handprint regions they are to capture. For example, the optical axis of the device capturing the frontal fingerprint image of the finger pad is perpendicular to the frontal region of the pad, and the optical axis of the device capturing the side fingerprint image is perpendicular to the side region of the pad. When the handprint collection system is used to collect several types of hand texture, the multiple image acquisition devices may include devices for collecting a first type of hand texture and devices for collecting a second type of hand texture, whose fields of view, focal lengths, placement angles and positions may differ. Understandably, compared with a device for collecting fingerprints, a device for collecting palmprints has a larger field of view. The field of view of the devices used to collect fingerprints or palmprints may refer to the field of view of a single device or the combined field of view of multiple devices. The same image acquisition device may be used to collect both the first type and the second type of hand texture.
Understandably, the image acquisition devices may also include a preview image acquisition device dedicated to preview, which may have a larger field of view than the devices used for capture; for example, a single preview device may cover the whole palmprint, while the combined field of view of the multiple capture devices covers the whole palmprint. The multiple image acquisition devices may capture, in one-to-one correspondence, multiple handprint images of a target part such as the user's finger or palm, and the multiple handprint images may correspond to different portions of the target part. For example, with three image acquisition devices, the handprint image captured by the first device may mainly correspond to the left portion of the finger or palm (i.e., the left side region), the image captured by the second device may mainly correspond to the middle portion (i.e., the frontal region), and the image captured by the third device may mainly correspond to the right portion (i.e., the right side region).
The optical axis of an image acquisition device is directed toward the face of the target part bearing the handprint, for example the pad of a finger or the palm of a hand. For example, if it is specified that the finger pad faces the ground plane during handprint collection, the optical axis of the image acquisition device may point vertically upward or obliquely upward so that the fingerprint of the user's finger can be captured.
When there are multiple image acquisition devices, the heights of their centers of gravity may be set to be the same or different. For example, when palmprints are collected, the centers of gravity of the multiple devices may be at the same height; when handprints are collected, the centers of gravity of the multiple devices may lie on a circular arc.
The processing device 240 is communicatively connected with the one or more image acquisition devices; the connection may be wired or wireless. The processing device 240 may receive the handprint images captured by each of the one or more image acquisition devices and process them. The processing device 240 may perform any required processing operations on the handprint images, including but not limited to one or more of the following: obtaining depth information from the handprint images; stitching together the handprint images captured by multiple image acquisition devices; and so on.
By way of example and not limitation, the processing device 240 may also be communicatively connected with the lighting system, by a wired or wireless connection. In this case, the processing device 240 may control the light sources of the lighting system to emit light according to the various lighting schemes described herein. Of course, optionally, the light sources may instead be controlled by a control device other than the processing device 240. Such a control device may be a control device inside the handprint collection system or an external control device independent of it.
Different people's skin reflects, absorbs and transmits light differently, and the same person's hand (a finger or palm is taken as an example below) reflects, absorbs and transmits light of different wavelengths differently. In addition, the same person's finger or palm behaves differently in reflection, absorption and transmission depending on whether it is dry, moist or sweaty.
The inventors found that, when collecting handprint images, if only green light is used for supplementary lighting, the finger or palm absorbs green light strongly: on the one hand, the light reflected from the finger or palm to the image acquisition device becomes weaker, reducing the image contrast; on the other hand, the absorbed light produces fluorescence and phosphorescence, which blurs the captured image. If only blue light is used, the finger or palm absorbs blue light weakly and reflects strongly toward the image acquisition device, which easily over-exposes people with light skin; moreover, if the hand is too dry, has dead skin, or is covered with sweat or other liquid, the strong reflections easily degrade the contrast. Nevertheless, blue light and green light each have their own advantages. For example, under blue supplementary light the contrast between the ridges and valleys of fingerprints or palmprints is improved, which suits people with shallow prints, while under green supplementary light overly dry or overly wet fingers are imaged more clearly.
Therefore, the inventors chose to supplement the light with blue and green light sources at the same time, so that the two kinds of light complement each other: as long as a clear image can be captured under one of the colors, a high-quality handprint image can be obtained relatively easily after processing. Accordingly, this scheme of supplementing with both blue and green light during handprint collection expands the applicability of the handprint collection system to different skin conditions and thus helps meet the collection needs of a wide range of people.
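The document does not specify how the blue-channel and green-channel images are combined; the sketch below shows one plausible approach, weighting each channel by its local contrast so that whichever illumination produced the clearer ridge/valley structure dominates locally. The function names, the window size and the contrast measure are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img: np.ndarray, win: int = 15) -> np.ndarray:
    """Local standard deviation as a simple per-pixel contrast/quality measure."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fuse_blue_green(blue: np.ndarray, green: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Fuse the blue-channel and green-channel handprint images, letting the
    locally clearer channel contribute more to the fused result."""
    wb = local_contrast(blue)
    wg = local_contrast(green)
    fused = (wb * blue.astype(np.float64) + wg * green.astype(np.float64)) / (wb + wg + eps)
    return np.clip(fused, 0, 255).astype(np.uint8)
```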
To avoid mutual interference between blue and green light, the wavelength separation of the blue and green light sources may be made as large as possible, that is, a blue light source with a shorter wavelength (toward violet) and a green light source with a longer wavelength (toward yellow) may be chosen. By way of example and not limitation, the blue light described herein may be light with a wavelength below 430 nm, for example 405 nm, and the green light may be light with a wavelength above 540 nm, for example 560 nm.
In a specific embodiment, the lighting system may include one or more light source sets, each including at least one blue light source and at least one green light source, the one or more light source sets being in one-to-one correspondence with the one or more image acquisition devices.
In this embodiment, the light source set corresponding to a given image acquisition device is responsible for supplementing light for that device; each time that device captures an image, only its corresponding light source set may emit light while the other sets remain off. Assigning a dedicated supplementary light source set to each image acquisition device makes it easier to control and manage the devices and light sources, and helps prevent different light sources from interfering with each other during image capture.
By way of example and not limitation, for any one of the one or more light source sets, the direction in which its light sources emit light may be approximately perpendicular to the handprint region captured while those light sources are on. With the light approximately perpendicular to the handprint region, the valleys and ridges of the handprint can be rendered more accurately in the handprint image. When the multiple image acquisition devices are arranged so that their optical axes are approximately perpendicular to the handprint regions to be captured, the emission direction of the light sources in each set is approximately parallel to the optical axis of the corresponding device; this can be achieved by placing the light sources of a set close enough to their corresponding image acquisition device.
Continuing to refer to FIG. 2, the optical axes 2122, 2142 and 2162 of the image acquisition devices 212, 214 and 216 are shown. As can be seen from FIG. 2, the emission direction of the light sources in each light source set is parallel to the optical axis of the corresponding acquisition device. Note that "parallel" as used herein does not necessarily require the parallel objects to be absolutely parallel to each other; approximate parallelism suffices. For example, "parallel" may mean that the angle between the parallel objects (in this embodiment, between the emission direction of a light source and the optical axis of the corresponding image acquisition device, or between any two rays emitted by the same parallel light source) is smaller than a preset threshold. The threshold may be set as small as practical, for example 10°.
According to an embodiment of the present application, the lighting system includes at least one light source pair, each pair including one blue light source and one green light source, and the distance between the blue and green light sources within the same pair is smaller than a preset distance threshold.
During image acquisition, the blue and green light sources of the same pair may be switched on or off simultaneously, which further helps keep the imaging conditions of the blue and green channels consistent. Consistent imaging conditions for the two channels help improve the fusion result of the blue channel image and the green channel image in subsequent image fusion.
However, this embodiment is only an example and not a limitation of the present application: the light sources in the lighting system may all be unpaired, or the lighting system may include both light source pairs and unpaired light sources.
Each light source set may include at least one light source pair, each pair including one blue light source and one green light source. The blue and green light sources in each pair may be placed as close together as possible to further ensure that the imaging conditions of the blue and green channels are consistent. For example, referring back to FIG. 2, any of the light source subsets shown there (e.g., 222, 222', 224, 224', 226, 226') may include at least one light source pair, so that the blue and green light sources in that subset can conveniently be placed close together.
The preset distance threshold may be set to any suitable value as required; the present application does not limit it.
According to an embodiment of the present application, the lighting system is located outside the imaging range of the one or more image acquisition devices. In embodiments in which the lighting system includes one or more light source sets, the light source set corresponding to a given image acquisition device may be located outside the imaging range of that device.
A light source may be placed at any suitable position around an image acquisition device, as long as it does not obstruct the device's imaging. Placing the light source set corresponding to a given image acquisition device outside that device's imaging range effectively prevents the light sources from interfering with the capture of handprint images.
According to an embodiment of the present application, the one or more image acquisition devices include a first image acquisition device, a second image acquisition device and a third image acquisition device, wherein the optical axis of the first image acquisition device is perpendicular to the plane of the handprint collection area that faces the one or more image acquisition devices, the optical axis of the second image acquisition device forms a first preset angle with the optical axis of the first image acquisition device, and the optical axis of the third image acquisition device forms a second preset angle with the optical axis of the first image acquisition device.
The plane of the handprint collection area that faces the one or more image acquisition devices may be, for example, a plane parallel to the plane of the above-mentioned light-transmitting plate. If it is specified that the finger pad or the palm must face the ground plane during handprint collection, that plane of the handprint collection area may be parallel to the ground plane. In this case, the optical axis of the first image acquisition device may point vertically upward, and the optical axes of the second and third image acquisition devices may point obliquely upward.
Either of the first preset angle and the second preset angle may be set to any suitable value as required; the present application does not limit them. The first and second preset angles may be the same or different. Exemplarily, the first preset angle may be in the range of 0 to 45 degrees, and the second preset angle may also be in the range of 0 to 45 degrees.
According to an embodiment of the present application, the optical axes of the first, second and third image acquisition devices lie in the same plane.
Arranging the optical axes of the three image acquisition devices in the same plane and non-parallel to one another is simple, and also makes it convenient to capture images of several different portions of a target part such as a finger or palm.
According to an embodiment of the present application, the handprint collection system may further include a structured light projector for emitting structured light toward the handprint collection area. The processing device may process the handprint images captured by the one or more image acquisition devices in the following way: extracting, from each captured handprint image, the grayscale image of the channel corresponding to the structured light, and determining handprint depth information based on the extracted structured light images.
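A minimal sketch of the channel-extraction step is shown below. It assumes the handprint image is an H x W x 3 RGB array and that the structured light occupies the red channel (red, e.g. around 660 nm, is the example given later in this description); the function name is illustrative. Deriving depth from the extracted structured light pattern would be a separate step not shown here.

```python
import numpy as np

def extract_structured_light_channel(handprint_rgb: np.ndarray) -> np.ndarray:
    """Return the grayscale image of the channel carrying the structured light.

    Assumes an H x W x 3 array in R, G, B order with red structured light,
    so the red channel is taken as the structured light image; the blue and
    green channels carry the illumination-light handprint texture.
    """
    assert handprint_rgb.ndim == 3 and handprint_rgb.shape[2] == 3
    return handprint_rgb[:, :, 0].astype(np.uint8)
```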
FIG. 14 shows a schematic diagram of the handprint collection system 200, the associated finger 400, and the handprint collection area 300 according to one embodiment of the present application. FIG. 14 shows that the handprint collection system 200 further includes a structured light projector 100. FIG. 2 may be understood as a front view of the handprint collection system 200, and FIG. 14 as its left side view. Note that FIG. 14 shows only a single image acquisition device 214 as an example for description; those skilled in the art can understand the positions of the other image acquisition devices in FIG. 14.
The structure of the handprint collection system 200 shown in FIG. 14 is only an example and not a limitation of the present application; the handprint collection system 200 according to the embodiments of the present application is not limited to what is shown in FIG. 14. For example, FIG. 14 shows that, taking a specific cross-section 302 as the boundary, the structured light projector 100 is located on the side closer to the finger root; in this case the head of the structured light projector 100 (the part that emits light) is horizontally farther from the finger root than its tail (the part opposite the head). The specific cross-section 302 is the cross-section of the finger 400 that coincides with the central axis of the handprint collection area 300. This arrangement of the structured light projector 100 is only an example; it may also be located on the side of the specific cross-section 302 farther from the finger root, i.e., the head of the structured light projector 100 may instead be horizontally closer to the finger root than its tail. In fact, the structured light projector 100 may be placed at any suitable position as required, as long as it does not obstruct the imaging of the image acquisition devices and can illuminate the handprint collection area.
By way of example and not limitation, the height of the center of gravity of the structured light projector 100 may be set to be substantially the same as the height of the combined center of gravity, or of the lowest center of gravity, of the one or more image acquisition devices. This makes it easier to arrange the components of the handprint collection system compactly and helps reduce the volume of the device. Alternatively, by way of example and not limitation, the height of the center of gravity of the structured light projector 100 may be lower than the height of the combined or lowest center of gravity of the one or more image acquisition devices, so as to ensure that the light projection area of the structured light projector 100 is large enough to cover the collection region corresponding to the target part as effectively as possible.
When fingerprints are collected, the target part is small, and the height of the center of gravity of the structured light projector 100 may be set substantially equal to the height of the combined or lowest center of gravity of the one or more image acquisition devices. When palmprints are collected, the target part is larger, and the height of the center of gravity of the structured light projector 100 may be lower than the height of the combined or lowest center of gravity of the one or more image acquisition devices.
The grayscale image of the channel corresponding to the structured light, extracted from any handprint image, is the structured light image corresponding to that handprint image. It can be understood that the structured light image contains the depth information corresponding to each pixel of the original handprint image. Therefore, through the structured light projector, depth information of the handprint of a target part such as a finger or palm can further be obtained. Obtaining handprint depth information facilitates the subsequent simulation of impressed handprint images. Many current applications require fingerprint images obtained by impression, so adding structured light allows the captured handprint image to be correctly unfolded into a two-dimensional image, making it compatible with impression-based handprint images in subsequent comparison and thereby broadening the application scenarios of the handprint collection system.
By way of example and not limitation, since blue and green are both used for supplementary lighting, in order to reduce interference with the blue and green channels the structured light may be chosen in a color that differs as much as possible from blue-green, such as red. Exemplarily, the structured light emitted by the structured light projector may be red light with a wavelength of 660 nm. Choosing blue, green and red for the illumination light and the structured light makes it convenient to extract the images of the different channels afterwards. Of course, the structured light does not have to be red; the influence of the structured light on the blue and green channels can also be removed later by image processing.
According to an embodiment of the present application, the direction in which the structured light projector emits light forms a third preset angle with the plane in which the optical axes of the one or more image acquisition devices lie.
The third preset angle may be set to any suitable value as required; the present application does not limit it. The third preset angle may differ from both the first and second preset angles, or may be the same as the first and/or second preset angle.
Assume that the optical axes of all the image acquisition devices in FIG. 14 lie in the same plane and that this plane appears, in the left side view of FIG. 14, as a straight line parallel to the optical axis 2142. It can be seen from FIG. 14 that the direction in which the structured light projector 100 emits light forms a certain angle with the plane of the optical axes of the image acquisition devices. Placing the structured light projector to the side of the image acquisition devices in this way makes it easy to keep the projector out of the imaging range of the devices, avoiding any obstruction of their imaging. In addition, this side arrangement also makes it convenient to place the structured light projector at a height similar to that of the image acquisition devices, which helps achieve a more compact layout inside the handprint collection system.
According to an embodiment of the present application, the handprint collection system 200 may further include a housing. The one or more image acquisition devices and the lighting system are arranged inside the housing, a window is provided on the housing at a position facing the handprint collection area, and a light-transmitting plate is arranged on the window.
The embodiment in which the image acquisition devices and the lighting system are arranged inside the housing and a light-transmitting plate is provided on the housing has been described above and is not repeated here. The light-transmitting plate may be made of any suitable material, including but not limited to glass. For example, the glass may carry an anti-reflection coating. The light-transmitting plate provides waterproofing and dust protection, helping to protect the main structure of the handprint collection system and prolong the service life of the equipment.
According to an embodiment of the present application, the handprint collection system 200 may further include a limiting component that defines the handprint collection area. An entrance is provided at the proximal end of the limiting component to allow part or all of the user's hand to pass through the entrance into the handprint collection area. A stop is provided at the distal end of the limiting component; the stop is located at a predetermined position of the handprint collection area along the insertion direction, so as to limit how far part or all of the user's hand can extend into the handprint collection area.
The proximal end of the limiting component can be understood as the end of the limiting component closer to the front of the user when the user uses the handprint collection system normally, and the distal end as the end farther from the front of the user.
FIG. 15 shows a schematic diagram of a part of the handprint collection system according to one embodiment of the present application. FIG. 15 shows the limiting component 250, together with its entrance 252 and stop 254. The area between the entrance 252 and the stop 254 of the limiting component 250 can be set as the handprint collection area 300. A target part such as the user's finger or palm can extend through the entrance 252 into the handprint collection area 300. The stop 254 limits how far a target part such as a finger or palm can reach, preventing the user from inserting it too far. The limiting component 250 conveniently guides the user to place the target part in the correct position, ensuring that the portion of the target part most suitable for detection (for example, the first knuckle of the finger closest to the fingertip) falls within the handprint collection area.
According to an embodiment of the present application, the handprint collection system 200 may further include a light-shielding component, the light-shielding component and at least some of the light sources in the lighting system being located on opposite sides of the handprint collection area.
Referring to FIG. 15, a light-shielding component 260 is shown, located on the opposite side of the handprint collection area 300 from at least some of the light sources in the lighting system. The light-shielding component is located above the handprint collection area, while at least some of the light sources in the lighting system are located below it. The light-shielding component 260 prevents light emitted by the light sources from reaching the outside and affecting the user experience, and also prevents external stray light from entering the handprint collection area and interfering with handprint collection.
In the case where the handprint collection system includes a light-shielding component and a light-transmitting plate, the handprint collection area may be at least a part of the region between the light-shielding component and the light-transmitting plate.
FIG. 16 shows a schematic diagram of a partial structure of the handprint collection system 200 and the related finger and handprint collection area according to an embodiment of the present application. Note that FIG. 16 is a schematic diagram and does not represent the actual shape or size of each component. FIG. 16 shows the light-shielding component 260 and the light-transmitting plate 270, with the handprint collection area 300 arranged between them. In addition, FIG. 16 shows two groups of auxiliary light sources 2281 and 2282 on either side of the handprint collection area 300, and an auxiliary light source 2283 near the fingertip position of the handprint collection area 300. In the example shown in FIG. 16, the two groups of auxiliary light sources 2281 and 2282 are each a light strip, and the auxiliary light source 2283 near the fingertip position is a point light source; this is only an example, and the auxiliary light sources may take other forms. Also note that, for ease of distinction, the finger 400 is drawn with a dotted line in FIG. 16.
According to an embodiment of the present application, in the case where the handprint collection system further includes the above-described limiting component 250, the handprint collection system may also include a light-shielding component 260, the light-shielding component 260 and the lighting system being located on opposite sides of the handprint collection area. The limiting component 250 further includes a connecting portion that connects the entrance and the stop, and the upper surface of the connecting portion abuts the lower surface of the light-shielding component.
Referring to FIG. 15, the connecting portion 256 that connects the entrance 252 and the stop 254 of the limiting component 250 is shown. The upper surface of the connecting portion 256 can abut the lower surface of the light-shielding component 260, which better prevents light emitted by the light sources from escaping to the outside and better prevents external stray light from irradiating the handprint collection area.
According to another aspect of the present application, a handprint collection method implemented with the above handprint collection system 200 is provided. The handprint collection method includes: while the blue light source group emits light according to a blue lighting scheme and the green light source group emits light according to a green lighting scheme, collecting images of the handprint collection area with the one or more image acquisition devices to obtain the handprint images collected by the one or more image acquisition devices, such that among the handprint images collected by each image acquisition device, at least one handprint image is collected while the blue light source group contains a blue light source whose luminous intensity is not 0, and at least one handprint image is collected while the green light source group contains a green light source whose luminous intensity is not 0. Here, the blue light source group includes one or more blue light sources and the green light source group includes one or more green light sources; the blue lighting scheme characterizes the luminous intensity of each blue light source, and the green lighting scheme characterizes the luminous intensity of each green light source. During each collection, the luminous intensity of at least one light source is not 0.
The structure and arrangement of the lighting system and the image acquisition devices can be understood from the description above and are not repeated here.
As described above, the lighting system can be divided into two groups by light source color, namely the blue light source group and the green light source group. The blue light source group may include all the blue light sources in the lighting system, and the green light source group may include all the green light sources in the lighting system.
For each image acquisition device in the handprint collection system, it is only necessary to ensure that at least one of the handprint images it collects is captured while the blue light source group contains a blue light source with non-zero luminous intensity, and at least one is captured while the green light source group contains a green light source with non-zero luminous intensity. The timing with which the image acquisition devices capture images and the timing with which the light sources in the lighting system emit light can both be set as needed.
Among the "at least one handprint image" collected while a blue light source has non-zero intensity and the "at least one handprint image" collected while a green light source has non-zero intensity, one or more handprint images may be the same. For example, any image acquisition device X may collect two handprint images in total: the first captured while at least one blue light source has non-zero luminous intensity and all green light sources are at 0, and the second captured while at least one green light source has non-zero luminous intensity and all blue light sources are at 0. As another example, image acquisition device X may collect only one handprint image, captured while at least one blue light source and at least one green light source each have non-zero luminous intensity. As yet another example, image acquisition device X may collect three handprint images: the first captured while at least one blue light source has non-zero intensity and all green light sources are at 0, the second while at least one blue light source and at least one green light source each have non-zero intensity, and the third while at least one green light source has non-zero intensity and all blue light sources are at 0. All of these collection schemes for image acquisition device X are feasible; of course, other numbers of images and other lighting patterns may also be used, which are not elaborated here.
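The constraint described above can be expressed compactly in code. The following Python sketch is only illustrative; the data structure (a list of captures, each recording the blue and green scheme in force) and the function name are assumptions, not part of the patented system.

def plan_satisfies_constraint(captures):
    """Check a capture plan for one image acquisition device: at least one
    image taken with some blue source on, and at least one image taken with
    some green source on (possibly the same image).

    `captures` is a list of (blue_scheme, green_scheme) pairs, where each
    scheme is a tuple of per-source intensities (0 means off).
    """
    has_blue = any(any(level > 0 for level in blue) for blue, _ in captures)
    has_green = any(any(level > 0 for level in green) for _, green in captures)
    return has_blue and has_green

# Example: two captures, blue-only then green-only, as in the first scheme above.
plan = [((1, 0, 0), (0, 0, 0)), ((0, 0, 0), (0, 1, 1))]
assert plan_satisfies_constraint(plan)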
Each capture performed by the one or more image acquisition devices corresponds to one blue lighting scheme and one green lighting scheme, and the handprint image obtained in that capture also corresponds to that blue lighting scheme and that green lighting scheme. Each blue lighting scheme characterizes the luminous intensity of each light source in the blue light source group, and each green lighting scheme characterizes the luminous intensity of each light source in the green light source group.
The relationship between light sources emitting light according to a lighting scheme and image capture is explained below using the blue light source group as an example; the green light source group is similar and is not described again. Suppose the blue light source group includes three blue light sources B1, B2, and B3, and the luminous intensity of each light source has only two levels, one at maximum intensity and one at 0. Then in theory there are at most eight blue lighting schemes. If maximum intensity is denoted by "1" and intensity 0 by "0", the eight blue lighting schemes can be written as 000, 001, 010, 100, 011, 101, 110, 111. The image acquisition devices may use some or all of these eight blue lighting schemes to emit light in sequence, and a corresponding handprint image is captured under each scheme. For example, six blue lighting schemes may be used in an arbitrary order; each time the blue light source group emits light according to one of them, at least some of the one or more image acquisition devices capture a corresponding handprint image. Note that the same blue lighting scheme may occur one or more times; for example, if one of the six blue lighting schemes, P, occurs three times and each of the others occurs once, then eight images are captured in total, three of which correspond to blue lighting scheme P.
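As a minimal illustration of the enumeration just described, assuming only on/off sources, the lighting schemes can be generated with a few lines of Python; this is a sketch for explanation, not part of the claimed system.

from itertools import product

def enumerate_schemes(num_sources):
    """Enumerate all on/off lighting schemes for `num_sources` light sources.

    Each scheme is a tuple of 0/1 values, one per source, so there are
    2 ** num_sources schemes in total.
    """
    return list(product((0, 1), repeat=num_sources))

blue_schemes = enumerate_schemes(3)
assert len(blue_schemes) == 8  # 000, 001, ..., 111 for sources B1, B2, B3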
Note that for each capture by an image acquisition device, whatever blue lighting scheme and green lighting scheme are used, at least one light source has non-zero luminous intensity.
Although the description above uses the example of each light source having only two intensity levels, it should be understood that each light source may have more intensity levels, and the numbers of intensity levels of any two different light sources may be the same or different; all of this can be set arbitrarily. When the blue/green light sources have more intensity levels, the maximum possible number of blue/green lighting schemes also increases, but the blue/green lighting schemes actually used to supplement light for any image acquisition device can be chosen as needed; they may include all possible blue/green lighting schemes or only a subset of them.
With the handprint collection method according to the embodiments of the present application, for each image acquisition device, at least one of the handprint images it collects is captured while a blue light source has non-zero luminous intensity, and at least one while a green light source has non-zero luminous intensity. Therefore, in this method, blue and green light sources can be used together to supplement light for each image acquisition device while it captures images. This broadens the method's applicability to different skin conditions and thus helps meet the collection needs of a wide range of people.
According to an embodiment of the present application, the above collection includes multiple captures, among which at least two correspond to different blue lighting schemes, and/or at least two correspond to different green lighting schemes.
For the one or more image acquisition devices, the operation of collecting handprint images may be performed multiple times. The image acquisition devices performing any two different capture operations may be the same or different. When controlling the lighting system, the lighting scheme corresponding to at least two of the multiple captures can be changed; the scheme that changes may be the blue lighting scheme, the green lighting scheme, or both. When both the blue and the green lighting scheme change, the at least two captures for which the blue scheme changes and the at least two captures for which the green scheme changes may be completely different, exactly the same, or only partially the same.
The at least two captures with different blue lighting schemes may be consecutive, or partially or entirely non-consecutive. Similarly, the at least two captures with different green lighting schemes may be consecutive, or partially or entirely non-consecutive.
The collection conditions faced by the handprint collection system are complex and can change at any time: different people have different skin conditions and place the target part in different positions, and even for the same person the skin condition and placement may differ from one moment to the next. The lighting required to obtain a good-quality handprint image may also differ between collection conditions. If the lighting scheme remains fixed while the image acquisition devices collect handprint images, only images of a single quality can be captured; if the current lighting scheme does not suit the current user's collection conditions, the final handprint collection result may be unsatisfactory. Therefore, varying the lighting scheme during collection causes the quality of the captured handprint images to vary as well, which to some extent compensates for the limitations of a single handprint image and helps achieve a better overall collection result.
According to an embodiment of the present application, the above collection includes multiple captures, and every two captures correspond to different blue lighting schemes, and/or every two captures correspond to different green lighting schemes.
Embodiments in which the lighting scheme changes between every two captures have been described above, and this embodiment can be understood with reference to that description, which is not repeated here. Changing the lighting scheme between every two captures provides a rich variety of lighting schemes and helps further improve the collection result.
According to an embodiment of the present application, at least two captures correspond to the same blue lighting scheme or the same green lighting scheme.
In at least two captures, the blue lighting scheme may be kept unchanged while only the green lighting scheme changes, or the green lighting scheme may be kept unchanged while only the blue lighting scheme changes. With this embodiment, the overall variation in the lighting schemes is smaller, and it is convenient to align handprint images captured at different moments based on the unchanged lighting scheme of the same color.
According to an embodiment of the present application, there are no two captures for which both the blue lighting scheme and the green lighting scheme are the same.
For two captures, if both the blue and the green lighting scheme are exactly the same, two handprint images of essentially the same quality will be captured, which is of little value. Making all the capture operations of an image acquisition device correspond to different blue and/or green lighting schemes allows the lighting scheme to be varied as much as possible over its entire collection process, yielding more images of different qualities. This increases the probability that a lighting scheme better suited to the current collection conditions occurs, increases the probability of obtaining high-quality handprint images, and thus helps improve the final handprint collection result.
According to an embodiment of the present application, while the blue light source group emits light according to a blue lighting scheme and the green light source group emits light according to a green lighting scheme, collecting images of the handprint collection area with the one or more image acquisition devices includes: at a first moment, while the blue light source group emits light according to a first blue lighting scheme and the green light source group emits light according to a first green lighting scheme, collecting images of the handprint collection area with at least some of the one or more image acquisition devices; and at a second moment, while the blue light source group emits light according to a second blue lighting scheme and the green light source group emits light according to a second green lighting scheme, collecting images of the handprint collection area with at least some of the one or more image acquisition devices. The first and second blue lighting schemes are selected from a set of blue lighting schemes, and the first and second green lighting schemes from a set of green lighting schemes. The set of blue lighting schemes includes w blue lighting schemes, where the maximum value of w is 2^k1; in these 2^k1 blue lighting schemes, the luminous intensity of any blue light source takes one of two levels, 0 or non-zero. The set of green lighting schemes includes p green lighting schemes, where the maximum value of p is 2^k2; in these 2^k2 green lighting schemes, the luminous intensity of any green light source takes one of two levels, 0 or non-zero. Here, k1 is the total number of blue light sources in the lighting system and k2 is the total number of green light sources; w is an integer greater than or equal to 2 and/or p is an integer greater than or equal to 2. The first blue lighting scheme differs from the second blue lighting scheme, and/or the first green lighting scheme differs from the second green lighting scheme.
In this embodiment, the luminous intensity of each light source in the lighting system has only two levels: non-zero and 0. Therefore, the set of blue lighting schemes can include at most 2^k1 blue lighting schemes, and similarly the set of green lighting schemes can include at most 2^k2 green lighting schemes.
For any light source, when it is at its non-zero intensity level, the value of its luminous intensity may be arbitrary, for example the maximum luminous intensity of that light source, or half of its maximum luminous intensity, and so on.
As described above, the lighting scheme may change while the image acquisition devices are capturing images. In this embodiment, light may be emitted at the first moment and the second moment according to different blue lighting schemes and/or different green lighting schemes. The image acquisition devices that capture handprint images at the first moment and the second moment may be exactly the same, completely different, or partially the same.
This embodiment can be understood with reference to the description of lighting scheme changes above, and is not repeated here.
According to an embodiment of the present application, the first blue lighting scheme is the same as the second blue lighting scheme, or the first green lighting scheme is the same as the second green lighting scheme.
Suppose the blue light source group includes three blue light sources B1, B2, B3 and the green light source group includes three green light sources G1, G2, G3, and that a non-zero luminous intensity is denoted by "1" and intensity 0 by "0". At the first moment, the first blue lighting scheme may be 100 and the first green lighting scheme may be 111; at the second moment, the second blue lighting scheme may still be 100 while the second green lighting scheme changes to 011. In this embodiment, when the blue lighting scheme changes the green lighting scheme stays unchanged, and conversely, when the green lighting scheme changes the blue lighting scheme stays unchanged.
The advantages of embodiments in which the blue and/or green lighting scheme remains unchanged across at least two captures have been described above and are not repeated here.
According to an embodiment of the present application, while the blue light source group emits light according to a blue lighting scheme and the green light source group emits light according to a green lighting scheme, collecting images of the handprint collection area with the one or more image acquisition devices further includes: at a third moment, while the blue light source group emits light according to a third blue lighting scheme and the green light source group emits light according to a third green lighting scheme, collecting images of the handprint collection area with at least some of the one or more image acquisition devices. The third blue lighting scheme is selected from the set of blue lighting schemes and the third green lighting scheme from the set of green lighting schemes. The third blue lighting scheme differs from at least one of the first and second blue lighting schemes, and/or the third green lighting scheme differs from at least one of the first and second green lighting schemes. There are no two captures for which both the blue lighting scheme and the green lighting scheme are the same.
The image acquisition devices that capture handprint images at the third moment may be exactly the same as, completely different from, or partially the same as those at the first moment. Similarly, they may be exactly the same as, completely different from, or partially the same as those at the second moment.
In this embodiment, a third capture is added. In the third capture, the blue lighting scheme or the green lighting scheme may be changed relative to at least one of the first two captures.
According to an embodiment of the present application, the lighting schemes satisfy both a first condition and a second condition. The first condition is that the first blue lighting scheme is the same as the second blue lighting scheme, or the first green lighting scheme is the same as the second green lighting scheme. The second condition is that the third blue lighting scheme is the same as the first blue lighting scheme, or the third blue lighting scheme is the same as the second blue lighting scheme, or the third green lighting scheme is the same as the first green lighting scheme, or the third green lighting scheme is the same as the second green lighting scheme.
When the first blue lighting scheme is the same as the second blue lighting scheme, the third blue lighting scheme differs from both, the first green lighting scheme differs from the second green lighting scheme, and the third green lighting scheme is the same as the first or the second green lighting scheme. Conversely, when the first green lighting scheme is the same as the second green lighting scheme, the third green lighting scheme differs from both, the first blue lighting scheme differs from the second blue lighting scheme, and the third blue lighting scheme is the same as the first or the second blue lighting scheme.
In other words, over the three captures the blue lighting scheme changes once and the green lighting scheme changes once. This makes it easy to capture handprint images under a richer set of lighting conditions.
According to an embodiment of the present application, the number of image acquisition devices among the one or more image acquisition devices is greater than 1, and all image acquisition devices are used in every capture.
Having all image acquisition devices capture handprint images together in every capture means that all captures are completed simply by stepping through all the required lighting schemes in sequence. This collection scheme is relatively efficient.
According to an embodiment of the present application, the number of image acquisition devices among the one or more image acquisition devices is greater than 1, and the image acquisition devices used in at least two of the multiple captures are not exactly the same.
For example, referring to FIG. 2, suppose the image acquisition device at the center belongs to a first group of image acquisition devices and the two image acquisition devices on either side belong to a second group. Images may be captured by the first group at a first moment and by the second group at a second moment. Although different image acquisition devices capture images at different moments, the handprint images they capture can still be stitched together. This separate-capture scheme helps prevent the light sources corresponding to other image acquisition devices from interfering with the capture of the current image acquisition device.
According to an embodiment of the present application, the one or more image acquisition devices include a first image acquisition device, a second image acquisition device and a third image acquisition device, wherein the optical axis of the first image acquisition device is perpendicular to the plane of the handprint collection area facing the one or more image acquisition devices, the optical axis of the second image acquisition device forms a first preset angle with the optical axis of the first image acquisition device, and the optical axis of the third image acquisition device forms a second preset angle with the optical axis of the first image acquisition device. The lighting system includes one or more light source sets, each including at least one blue light source and at least one green light source, and the one or more light source sets correspond one-to-one with the one or more image acquisition devices. While the blue light source group emits light according to a blue lighting scheme and the green light source group emits light according to a green lighting scheme, collecting images of the handprint collection area with the one or more image acquisition devices includes: at a first moment, while the blue light source group emits light according to a first blue lighting scheme and the green light source group emits light according to a first green lighting scheme, collecting images of the handprint collection area with a first group of image acquisition devices; and at a second moment, while the blue light source group emits light according to a second blue lighting scheme and the green light source group emits light according to a second green lighting scheme, collecting images of the handprint collection area with a second group of image acquisition devices. The first group of image acquisition devices is one of the first image acquisition device and the angled image acquisition devices, and the second group is the other; the angled image acquisition devices include the second image acquisition device and the third image acquisition device. The first blue lighting scheme sets non-zero luminous intensity only for the blue light sources in the blue light source group that correspond to the image acquisition devices capturing at the first moment, and the first green lighting scheme sets non-zero luminous intensity only for the corresponding green light sources in the green light source group. The second blue lighting scheme sets non-zero luminous intensity only for the blue light sources corresponding to the image acquisition devices capturing at the second moment, and the second green lighting scheme sets non-zero luminous intensity only for the corresponding green light sources.
The arrangement of the first, second and third image acquisition devices has been described above and is not repeated here. The embodiment in which the lighting system includes light source sets corresponding one-to-one with the image acquisition devices has also been described above and is not repeated here.
As described above, the first, second and third image acquisition devices can be divided into two groups: the image acquisition device at the center forms one group, and the two image acquisition devices on either side form the other. When either group captures images, only the light sources in the corresponding light source sets may have non-zero luminous intensity, while the light sources in the other light source sets all have luminous intensity 0.
Since the light source set at the center is relatively close to the light source sets on the left and right sides, they easily interfere with the image capture of each other's corresponding image acquisition devices. It is therefore preferable to make the central light source set and the side light source sets emit light asynchronously, that is, not at the same time. This helps obtain higher-precision handprint images.
In addition, the light source sets on the two sides are relatively far from each other, so they may optionally emit light synchronously, which shortens the handprint collection time as much as possible. Of course, it is also feasible to make the side light source sets emit light asynchronously as well, which further avoids interference from neighboring light sources with the image acquisition devices.
The above scheme of making the first and second groups of light source sets emit light asynchronously is applicable when the light sources corresponding to the first group of image acquisition devices affect the image capture of the second group and/or the light sources corresponding to the second group affect the image capture of the first group.
By way of example and not limitation, the time difference between the first moment and the second moment may be smaller than a preset time threshold. The preset time threshold can be set to any suitable value as required, which is not limited by the present application. For example, the preset time threshold may be 25 ms.
Considering possible shaking of the finger or palm, and to reduce the difficulty of later stitching the images from different image acquisition devices together, the time interval between two asynchronously emitting light source sets should be kept as short as possible, preferably no more than 25 ms. This asynchronous lighting approach is particularly suitable for LED modules packaged together.
According to an embodiment of the present application, at the first moment, while the blue light source group emits light according to the first blue lighting scheme and the green light source group according to the first green lighting scheme, collecting images of the handprint collection area with the first group of image acquisition devices includes: sending, by the processing device at the first moment, a trigger signal to the first group of image acquisition devices and a lighting control signal to the first group of light source sets corresponding to the first group of image acquisition devices. At the second moment, while the blue light source group emits light according to the second blue lighting scheme and the green light source group according to the second green lighting scheme, collecting images of the handprint collection area with the second group of image acquisition devices includes: sending, by the processing device at the second moment, a trigger signal to the second group of image acquisition devices and a lighting control signal to the second group of light source sets corresponding to the second group of image acquisition devices. The trigger signal triggers the corresponding image acquisition device to start capturing an image, and the lighting control signal controls the corresponding light source set to start emitting light at non-zero intensity. The time difference between the first moment and the second moment equals the sum of the capture duration of the first group of image acquisition devices and a preset delay duration.
By way of example and not limitation, the processing device 240 may control the image capture of each image acquisition device and the light emission of each light source. The processing device 240 may control an image acquisition device and its corresponding light source set to start working synchronously by sending the trigger signal and the lighting control signal to them at the same time.
In embodiments where different groups of light source sets emit light asynchronously and different groups of image acquisition devices capture images asynchronously, the processing device 240 may first control the first group of image acquisition devices and the first group of light source sets to start working by sending the trigger signal and the lighting control signal synchronously; the capture is completed after the capture duration of the first group of image acquisition devices; then, after a further delay (the preset delay duration), the processing device may start controlling the second group of image acquisition devices and the second group of light source sets, again by sending the trigger signal and lighting control signal synchronously. The capture duration of an image acquisition device is generally short, for example 2 ms. The delay duration can be set small, so that its sum with the capture duration of the first group of image acquisition devices is less than the preset time threshold described above.
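A minimal sketch of this two-group timing sequence is given below, assuming a simple software controller; the functions trigger_capture and set_lighting, the durations, and the example schemes are illustrative assumptions, since the actual signaling in the described system is hardware-level and not specified here.

import time

CAPTURE_DURATION_S = 0.002   # example: roughly 2 ms capture duration
DELAY_S = 0.005              # preset delay; capture + delay stays well under 25 ms

def trigger_capture(camera_group):
    """Hypothetical stand-in for sending a trigger signal to a camera group."""
    print(f"trigger {camera_group}")

def set_lighting(light_source_set, scheme):
    """Hypothetical stand-in for sending a lighting control signal."""
    print(f"{light_source_set} -> {scheme}")

def capture_two_groups():
    # First moment: first camera group and its light source set start together.
    set_lighting("light_set_1", {"blue": (1, 0, 0), "green": (1, 1, 1)})
    trigger_capture("camera_group_1")
    time.sleep(CAPTURE_DURATION_S)   # wait for the first capture to finish
    time.sleep(DELAY_S)              # preset delay before the second moment
    # Second moment: second camera group and its light source set start together.
    set_lighting("light_set_2", {"blue": (0, 1, 1), "green": (0, 1, 1)})
    trigger_capture("camera_group_2")

capture_two_groups()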
According to an embodiment of the present application, the lighting system includes one or more light source sets, each including at least one blue light source and at least one green light source, and the one or more light source sets correspond one-to-one with the one or more image acquisition devices. For any one of the light source sets, the blue and green light sources in that set emit light synchronously, and their light-emitting periods lie within the effective exposure period of the corresponding image acquisition device.
The light-emitting period of a light source is the period during which its luminous intensity is not 0. Any of the one or more image acquisition devices may use either global exposure or line exposure. When the current image acquisition device uses global exposure, the exposure period of all rows in each frame it captures is the same; this common exposure period may be called the effective exposure period. The light-emitting period of each light source in the light source set corresponding to the current image acquisition device only needs to lie within that effective exposure period; it may be shorter than or equal to the effective exposure period of the global exposure.
When the current image acquisition device uses line exposure, the effective exposure period may be the exposure period shared by all rows in each frame. Line exposure can be further divided into at least two modes: line exposure with initial time synchronization and line exposure without initial time synchronization.
FIG. 17 shows a schematic diagram of image capture and light-source emission timing according to an embodiment of the present application. The line exposure shown in FIG. 17 is synchronized at the start: the exposure of every row starts at the same time but ends at a different time. For example, if the total exposure time is 15 seconds, the exposure of the first row covers seconds 1-10, the second row seconds 1-11, the third row seconds 1-12, and so on, with the sixth row covering seconds 1-15. The effective exposure period in this case is seconds 1-10, so the light-emitting period of the light sources can be placed within seconds 1-10. The total time during which the luminous intensity of a light source is non-zero may be equal to or shorter than 10 seconds.
Line exposure without initial time synchronization differs from the example shown in FIG. 17 only in the start times; the other timings, such as the end time of each row's exposure and the data output time, are similar. For example, without initial time synchronization and with a total exposure time of 15 seconds, the exposure of the first row may cover seconds 1-10, the second row seconds 2-11, the third row seconds 3-12, and so on, with the sixth row covering seconds 6-15. The effective exposure period in this case is seconds 6-10, so the light-emitting period of the light sources can be placed within seconds 6-10. The total time during which the luminous intensity of a light source is non-zero may be equal to or shorter than 5 seconds.
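The effective exposure window in the two line-exposure cases above is simply the intersection of all per-row exposure intervals. The sketch below computes it for both cases; the numbers reuse the example figures from the text, and the function name is illustrative only.

def effective_exposure_window(row_intervals):
    """Return the (start, end) window shared by all rows, i.e. the
    intersection of the per-row exposure intervals, or None if empty."""
    start = max(s for s, _ in row_intervals)
    end = min(e for _, e in row_intervals)
    return (start, end) if start < end else None

# Line exposure with initial time synchronization (all rows start at second 1).
synced = [(1, 10), (1, 11), (1, 12), (1, 13), (1, 14), (1, 15)]
assert effective_exposure_window(synced) == (1, 10)     # light may be on in seconds 1-10

# Line exposure without initial time synchronization (rows start one second apart).
staggered = [(1, 10), (2, 11), (3, 12), (4, 13), (5, 14), (6, 15)]
assert effective_exposure_window(staggered) == (6, 10)  # light may be on in seconds 6-10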
Keeping the light-emitting periods of the blue and green light sources in a light source set within the effective exposure period of the corresponding image acquisition device ensures that different positions (for example, different rows) in the image captured by that device are acquired under the same lighting conditions, guaranteeing consistent imaging conditions across the captured image. This helps improve the effect of the subsequent fusion-then-stitching or stitching-then-fusion.
According to an embodiment of the present application, the lighting system includes one or more light source sets, each including at least one blue light source and at least one green light source, and the one or more light source sets correspond one-to-one with the one or more image acquisition devices. The method further includes:
for each of the one or more image acquisition devices,
extracting the blue channel and the green channel from a same handprint image collected by that image acquisition device, to obtain a current blue channel image and a current green channel image;
calculating the contrast and/or brightness of the current blue channel image and of the current green channel image respectively;
based on the contrast and/or brightness of the current blue channel image and of the current green channel image, determining an adjusted blue luminous intensity and an adjusted green luminous intensity for the light source set corresponding to that image acquisition device, the adjusted intensities being such that the contrast and/or brightness of a subsequent blue channel image captured under the adjusted blue luminous intensity and of a subsequent green channel image captured under the adjusted green luminous intensity are consistent;
controlling the blue light sources in the light source set corresponding to that image acquisition device to emit blue light toward the handprint collection area at the adjusted blue luminous intensity; and
controlling the green light sources in the light source set corresponding to that image acquisition device to emit green light toward the handprint collection area at the adjusted green luminous intensity.
The present embodiment is described below by taking three image acquisition devices and three light source sets as an example. For example, the blue channel and green channel images of the images currently captured by the three image acquisition devices can be extracted in real time, and the contrast and/or brightness of the blue channel image and the green channel image corresponding to each of the three image acquisition devices can be calculated. For example, the respective brightness values can be obtained by computing the histogram of the blue channel image and the histogram of the green channel image. Subsequently, the adjusted blue luminous intensity of each of the three light source sets corresponding one-to-one with the three image acquisition devices can be calculated from the contrast and/or brightness of the three blue channel images, and the adjusted green luminous intensity of each of the three light source sets can be calculated from the contrast and/or brightness of the three green channel images. Then, at the next light-emitting moment, each of the three light source sets is controlled to emit light at its own adjusted blue luminous intensity and adjusted green luminous intensity calculated above.
The above embodiment can adjust the light source intensity of a single channel independently. By adjusting the blue and green light sources separately, the light sources of the same color at different collection positions can be given relatively consistent contrast and/or brightness, which makes it easier to obtain a good stitching effect when the images from different collection positions are subsequently stitched together. In addition, by independently adjusting the light source intensity of each single channel, the luminous intensity of the blue and green light sources can be adjusted according to how the skin responds to the light, so as to better adapt to different skin conditions and thereby accommodate an even wider range of people.
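A minimal sketch of this per-channel adjustment loop is given below, assuming the handprint image is available as an H×W×3 RGB NumPy array. The brightness estimate via the channel histogram follows the paragraph above, while the proportional update toward a common target brightness is an illustrative assumption, not the control law prescribed by the present application.

```python
import numpy as np

def channel_brightness(img_rgb, channel):
    # Brightness estimate from the channel histogram: histogram-weighted mean level.
    hist, _ = np.histogram(img_rgb[..., channel], bins=256, range=(0, 256))
    levels = np.arange(256)
    return float((hist * levels).sum() / hist.sum())

def adjust_intensities(img_rgb, blue_intensity, green_intensity, target=128.0):
    # Hypothetical proportional rule: scale each source so that its channel
    # brightness moves toward the same target, making blue and green consistent.
    b = channel_brightness(img_rgb, channel=2)  # blue channel (RGB order assumed)
    g = channel_brightness(img_rgb, channel=1)  # green channel
    new_blue = blue_intensity * target / max(b, 1e-6)
    new_green = green_intensity * target / max(g, 1e-6)
    return new_blue, new_green
```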
According to an embodiment of the present application, collecting images of the handprint collection area through the one or more image acquisition devices while the blue light source group emits light according to a blue light-emitting scheme and the green light source group emits light according to a green light-emitting scheme includes: at any target light-emitting moment, while the blue light source group emits light according to a selected blue light-emitting scheme and the green light source group emits light according to a selected green light-emitting scheme, collecting images of the handprint collection area through at least some of the one or more image acquisition devices; where the selected blue light-emitting scheme is selected from a set of blue light-emitting schemes and the selected green light-emitting scheme is selected from a set of green light-emitting schemes. The set of blue light-emitting schemes includes w blue light-emitting schemes, where w ∈ {1, 2, 3, ..., N_t1 - 1, N_t1}, and N_t1 is the total number of combinations obtained by combining the luminous-intensity levels of the blue light sources in the blue light source group, one level being chosen from the N_i levels of luminous intensity of the i-th blue light source, N_i being the total number of luminous-intensity levels of the i-th blue light source, i = 1, 2, 3, ..., k1 - 1, k1, and k1 being the total number of blue light sources in the lighting system. The set of green light-emitting schemes includes p green light-emitting schemes, where p ∈ {1, 2, 3, ..., N_t2 - 1, N_t2}, and N_t2 is the total number of combinations obtained by combining the luminous-intensity levels of the green light sources in the green light source group, one level being chosen from the N_j levels of luminous intensity of the j-th green light source, N_j being the total number of luminous-intensity levels of the j-th green light source, j = 1, 2, 3, ..., k2 - 1, k2, and k2 being the total number of green light sources in the lighting system.
The foregoing describes an embodiment in which the luminous intensity of each blue light source and of each green light source is divided into two levels, but this is merely an example. The total number of luminous-intensity levels of the i-th blue light source can be denoted N_i, where the N_i values of any two different blue light sources may or may not be equal, and the N_i of any blue light source may be greater than or equal to 2. For example, the luminous intensity of one blue light source may be divided into 3 levels and that of another blue light source into 4 levels. The luminous-intensity levels of all blue light sources can be combined. For example, if the lighting system includes three blue light sources in total, with the luminous intensity of the first divided into 2 levels, the second into 4 levels and the third into 3 levels, then the total number of combinations N_t1 equals 2 × 4 × 3, i.e. 24. Similarly, the total number of luminous-intensity levels of the j-th green light source can be denoted N_j, where the N_j values of any two different green light sources may or may not be equal, and the N_j of any green light source may be greater than or equal to 2. The luminous-intensity levels of the green light sources are combined in the same way as those of the blue light sources, and the details are not repeated here.
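The counting rule above is simply a product over the per-source level counts; a brief sketch, using the example level counts from the paragraph above:

```python
from math import prod

levels_per_blue_source = [2, 4, 3]       # example: three blue light sources
n_t1 = prod(levels_per_blue_source)      # total number of level combinations
print(n_t1)                              # -> 24
```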
It can be understood that, when the target light-emitting moment is the above-mentioned first moment, the selected blue light-emitting scheme is the first blue light-emitting scheme and the selected green light-emitting scheme is the first green light-emitting scheme. When the target light-emitting moment is the above-mentioned second moment, the selected blue light-emitting scheme is the second blue light-emitting scheme and the selected green light-emitting scheme is the second green light-emitting scheme. When the target light-emitting moment is the above-mentioned third moment, the selected blue light-emitting scheme is the third blue light-emitting scheme and the selected green light-emitting scheme is the third green light-emitting scheme.
According to an embodiment of the present application, the handprint images collected while the blue light source group contains a blue light source whose luminous intensity is not 0 include at least one first handprint image, and any two of the at least one first handprint image are collected under blue light-emitting schemes that differ from each other; the handprint collection method further includes: selecting, from the at least one first handprint image, a first handprint image whose image quality meets a first target requirement for use in performing the blue-green fusion operation. The handprint images collected while the green light source group contains a green light source whose luminous intensity is not 0 include at least one second handprint image, and any two of the at least one second handprint image are collected under green light-emitting schemes that differ from each other; the handprint collection method further includes: selecting, from the at least one second handprint image, a second handprint image whose image quality meets a second target requirement for use in performing the blue-green fusion operation.
The first target requirement and the second target requirement may be the same or different. In one or more embodiments, image quality evaluation may be performed on each first handprint image to obtain a quality score for each first handprint image. The first target requirement may include the quality score of the first handprint image being greater than a first quality score threshold, which can be set to any value as required. The image quality evaluation may include evaluating one or more of the sharpness of the first handprint image, the degree to which the three-dimensional target (part or all of the user's hand) is occluded, the size of the three-dimensional target, and the like. Likewise, in one or more embodiments, image quality evaluation may be performed on each second handprint image to obtain a quality score for each second handprint image. The second target requirement may include the quality score of the second handprint image being greater than a second quality score threshold, which can be set to any value as required. The image quality evaluation may include evaluating one or more of the sharpness of the second handprint image, the degree to which the three-dimensional target (part or all of the user's hand) is occluded, the size of the three-dimensional target, and the like.
For example, if the blue light source group includes 3 blue light sources in total and the luminous intensity of each blue light source is divided into 3 levels, there are 27 blue light-emitting schemes in total; excluding the scheme in which the luminous intensity of all blue light sources is 0 leaves 26 blue light-emitting schemes. 26 first handprint images can be collected while emitting light under these 26 blue light-emitting schemes respectively, and a first handprint image whose image quality meets the first target requirement can be selected from these 26 first handprint images for performing the blue-green fusion operation. The second handprint image is selected in a similar way, and the details are not repeated here.
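A minimal sketch of this quality-gated selection is given below, assuming a hypothetical quality_score() function; the gradient-energy sharpness measure used here is only a stand-in, since the present application leaves the exact scoring method open (occlusion and target-size checks would be added in the same place).

```python
import numpy as np

def quality_score(img_gray):
    # Hypothetical stand-in: gradient energy as a crude sharpness score.
    gy, gx = np.gradient(img_gray.astype(float))
    return float((gx ** 2 + gy ** 2).mean())

def select_candidates(images, threshold):
    # Keep only the images whose quality score exceeds the target threshold.
    return [img for img in images if quality_score(img) > threshold]
```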
According to an embodiment of the present application, the handprint collection method may further include performing a blue-green fusion operation. The blue-green fusion operation may include: extracting the blue channel and the green channel from the same handprint image collected by the same image acquisition device to obtain a blue channel image and a green channel image; and fusing the blue channel image and the green channel image to obtain the fused handprint image corresponding to that image acquisition device. In this case, the first handprint image and the second handprint image may be the same handprint image.
The original color image collected by an image acquisition device has three basic channels: red, green and blue (RGB). By representing the image as a matrix, the values of the corresponding channel can be obtained.
After the grayscale images of the green channel and the blue channel are extracted respectively, the grayscale images of these two channels can be fused together. The fusion operation may be implemented in any suitable fusion manner; two exemplary fusion manners are described below.
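As a simple sketch of the channel-extraction step, assuming the handprint image is an H×W×3 NumPy array in RGB channel order (a BGR-ordered capture would swap the indices):

```python
import numpy as np

def extract_blue_green(img_rgb):
    # Split the color image into single-channel grayscale images.
    green = img_rgb[..., 1].astype(np.float32)  # green channel
    blue = img_rgb[..., 2].astype(np.float32)   # blue channel
    return blue, green
```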
According to an embodiment of the present invention, the blue channel image, the green channel image and the fused handprint image are each divided into m units, m being an integer greater than 1, and
fusing the blue channel image and the green channel image includes:
from the i-th unit of the blue channel image or of the further-processed blue channel image and the i-th unit of the green channel image or of the further-processed green channel image, selecting the unit whose pixel values fluctuate the most;
determining the pixel values of the i-th unit of the fused handprint image based on the pixel values of the selected unit;
where i = 1, 2, 3, ..., m, the further-processed blue channel image is an image obtained by further processing the blue channel image, and the further-processed green channel image is an image obtained by further processing the green channel image.
m can be set to any suitable value as required, and the present invention does not limit this. The m units can be divided in any suitable manner. In one example, the m units are divided per pixel, i.e. each pixel forms one unit, so the image is divided into as many units as it has pixels. In another example, every group of several pixels in the image forms one unit.
The degree of pixel value fluctuation can be expressed in many ways; it mainly reflects how strongly the pixel values of the image vary at the current unit. For any unit, the fused handprint image preserves, as far as possible, the pixel information from whichever of the blue channel image and the green channel image has the more strongly fluctuating pixel values at that unit. The fused handprint image obtained in this way has strong pixel contrast, which makes it easier to compare handprints when the handprint image is later used for handprint recognition.
According to an embodiment of the present invention, each of the m units contains only a single pixel, the degree of pixel value fluctuation of any unit is expressed as the absolute value of the difference between that unit's pixel value and the average pixel value of the surrounding pixels, the further processing includes normalization, and
fusing the blue channel image and the green channel image includes:
normalizing the pixel values of the blue channel image and the green channel image;
for the pixel a1 contained in the i-th unit of the first channel image, calculating the average pixel value E1 of the M pixels around that pixel, M being an integer greater than 1;
for the pixel a2 contained in the i-th unit of the second channel image, calculating the average pixel value E2 of the M pixels around that pixel;
if |F1 - E1| > |F2 - E2|, determining F3 = F1 - E1 + (E1 + E2)/2;
where the first channel image and the second channel image are respectively one of the normalized blue channel image and the normalized green channel image, the first channel image and the second channel image being different, F1 is the pixel value of pixel a1, F2 is the pixel value of pixel a2, and F3 is the pixel value of the pixel contained in the i-th unit of the fused handprint image.
This embodiment is a pixel-by-pixel fusion approach. Since the pixel values of the grayscale images of the different channels need to be summed, the grayscale images of the two channels may first be normalized beforehand. The normalization may be implemented with any suitable normalization method.
M can be set to any suitable value as required, and the present invention does not limit this. This embodiment is described below with M = 8 as an example. For example, the pixel value F1 of pixel a1 can be taken from the normalized blue channel image, and the pixel value F2 of the pixel a2 at the position corresponding to a1 can be taken from the normalized green channel image. Then the average pixel value E1 of the 8 pixels around a1 and the average pixel value E2 of the 8 pixels around a2 can be calculated. If |F1 - E1| > |F2 - E2|, then F1 - E1 + (E1 + E2)/2 is used as the final pixel value of the pixel in the fused image at the position corresponding to a1 or a2. Those skilled in the art will understand that, for a pixel with fewer than 8 surrounding pixels (for example the first pixel in the upper-left corner of the image), the average pixel value can be computed over all of the pixels surrounding that pixel.
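A compact sketch of this pixel-by-pixel rule follows, assuming both inputs are already normalized float arrays of equal shape. The symmetric branch (when the green-channel fluctuation is larger) is filled in by analogy, which the description implies but does not spell out, and the border handling is approximate.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_pixelwise(blue_norm, green_norm):
    # E1, E2: mean of the 8 surrounding pixels (3x3 window minus the center);
    # exact at interior pixels, approximate at the borders ('nearest' padding).
    def neighbor_mean(img):
        window = uniform_filter(img, size=3, mode='nearest')
        return (window * 9.0 - img) / 8.0
    e1 = neighbor_mean(blue_norm)
    e2 = neighbor_mean(green_norm)
    d1 = np.abs(blue_norm - e1)
    d2 = np.abs(green_norm - e2)
    mid = (e1 + e2) / 2.0
    # Keep the deviation of whichever channel fluctuates more, re-centered on the mean.
    return np.where(d1 > d2, blue_norm - e1 + mid, green_norm - e2 + mid)
```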
According to an embodiment of the present invention, normalizing the pixel values of the blue channel image and the green channel image includes: obtaining a first pixel value located at a specific percentile of all pixel values of the blue channel image and a second pixel value located at the same specific percentile of all pixel values of the green channel image; unifying the first pixel value and the second pixel value to the same specific pixel value; scaling all pixel values of the blue channel image proportionally, based on the ratio between the first pixel value and the specific pixel value, to obtain the normalized blue channel image; and scaling all pixel values of the green channel image proportionally, based on the ratio between the second pixel value and the specific pixel value, to obtain the normalized green channel image.
The specific percentile can be any suitable percentage value, and this is not limited here. In one example, the specific percentile is 20%. Taking this as an example, the normalization is described as follows. For example, the pixel value at the top-20% position can be taken from the blue channel image to obtain the first pixel value, and the pixel value at the top-20% position can be taken from the green channel image to obtain the second pixel value. It can be understood that the top-20% position refers to the pixel value that ranks at the 20% position among all pixel values of the entire image.
Subsequently, the first pixel value and the second pixel value can be unified to an equal, specific pixel value. For example, if the first pixel value is 200 and the second pixel value is 100, the first pixel value can be halved to 100 so that the first and second pixel values are both 100. All pixel values in the blue channel image are then halved while the pixels of the green channel image remain unchanged, which completes the normalization of the two channels.
Of course, the above embodiment, in which the pixel values of the blue channel image are scaled down while those of the green channel image remain unchanged, is merely an example and does not limit the present invention. Other suitable normalization approaches may also be used; for example, keeping the pixel values of the blue channel image unchanged while scaling up those of the green channel image, or changing the pixel values of both the blue channel image and the green channel image, are all feasible.
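A minimal sketch of this percentile-based normalization, assuming the plain 20th percentile as the reference point (the exact percentile convention is left open above) and choosing, as one possible convention, the smaller of the two percentile values as the common target:

```python
import numpy as np

def normalize_pair(blue, green, percentile=20.0):
    # Pixel value at the chosen percentile of each channel image.
    p_blue = np.percentile(blue, percentile)
    p_green = np.percentile(green, percentile)
    target = min(p_blue, p_green)                 # unify both to the same value
    blue_n = blue.astype(np.float32) * (target / max(p_blue, 1e-6))
    green_n = green.astype(np.float32) * (target / max(p_green, 1e-6))
    return blue_n, green_n
```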
According to an embodiment of the present invention, each of the m units contains a plurality of pixels, the degree of pixel value fluctuation of any unit is expressed as the mean square deviation or gradient of that unit, and fusing the blue channel image and the green channel image includes: calculating the mean square deviation or gradient of each of the m units in the blue channel image; calculating the mean square deviation or gradient of each of the m units in the green channel image; from the i-th unit of the blue channel image and the i-th unit of the green channel image, selecting the unit with the larger mean square deviation or gradient; and determining the pixel values of the pixels contained in the selected unit as the pixel values of the pixels contained in the i-th unit of the fused handprint image.
In addition to pixel-by-pixel fusion, block-wise fusion is also possible. For example, each unit into which the image is divided may contain a plurality of pixels. For corresponding blocks of the blue channel image and the green channel image, an index such as the mean square deviation or gradient of each block can be calculated, and the block with the larger in-block mean square deviation or gradient is selected from the corresponding blocks of the blue channel image and the green channel image as the final block of the fused handprint image.
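A short sketch of this block-wise variant, assuming square non-overlapping blocks whose size divides the image dimensions and using the per-block standard deviation as the fluctuation index:

```python
import numpy as np

def fuse_blockwise(blue, green, block=16):
    # Assumes image height and width are multiples of `block`.
    fused = np.empty_like(blue, dtype=np.float32)
    h, w = blue.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            b_blk = blue[y:y + block, x:x + block]
            g_blk = green[y:y + block, x:x + block]
            # Keep the block whose pixel values fluctuate more (larger std).
            fused[y:y + block, x:x + block] = b_blk if b_blk.std() > g_blk.std() else g_blk
    return fused
```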
According to an embodiment of the present invention, the method further includes: smoothing the pixel values at the junctions between adjacent units of the fused handprint image to obtain an updated fused handprint image.
The junctions between blocks can be smoothed; otherwise the boundaries may appear abrupt. The smoothing may be any suitable smoothing method, and this is not limited here. For example, an averaging operation may be performed on the pixel values of a preset number of pixels at the junction of two or more blocks, and the average value used as the new pixel value of that preset number of pixels.
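As one possible (assumed) way to realize this, the sketch below averages a narrow band of pixels on either side of each vertical block seam, row by row; horizontal seams would be handled symmetrically.

```python
import numpy as np

def smooth_vertical_seams(fused, block=16, band=2):
    # Replace the pixels within `band` columns of each vertical block boundary
    # with the mean of that band (a simple realization of seam averaging).
    out = fused.copy()
    h, w = fused.shape
    for x in range(block, w, block):
        lo, hi = x - band, x + band
        out[:, lo:hi] = fused[:, lo:hi].mean(axis=1, keepdims=True)
    return out
```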
Exemplarily, the handprint collection method may further include: performing the above blue-green fusion operation for each of the one or more image acquisition devices; and obtaining an overall handprint image based on the fused handprint images respectively corresponding to the one or more image acquisition devices.
Fusing the blue channel image and the green channel image of any image acquisition device yields its corresponding fused handprint image. Exemplarily, when there is only one image acquisition device, its fused handprint image may be used as the overall handprint image. Exemplarily, when there are multiple image acquisition devices, the fused handprint images corresponding to the multiple image acquisition devices may be stitched together to obtain the overall handprint image. The overall handprint image can be used for subsequent purposes such as handprint recognition.
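A deliberately simplified sketch of this final assembly step, assuming the per-device fused images cover adjacent, already-aligned sub-regions of equal height; a real stitching step would additionally register and blend overlapping areas, which is not shown here.

```python
import numpy as np

def assemble_overall(fused_images):
    # One device: its fused image is itself the overall handprint image.
    if len(fused_images) == 1:
        return fused_images[0]
    # Several devices covering adjacent strips: place the strips side by side.
    return np.hstack(fused_images)
```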
Most existing non-contact collection devices are designed to collect single-finger fingerprints; there is no non-contact collection device aimed at handprints that require a larger collection area, such as four-finger fingerprints, double-thumb fingerprints, flat-palm palmprints and side-palm palmprints. If an existing non-contact collection device intended for single-finger fingerprints is used directly to collect one or more of the aforementioned large-area handprints, the desired handprint may be collected incompletely, the imaging quality at the edge of the field of view may be lower than at the center, or the collection result may be unsatisfactory because the required collection accuracy differs.
An embodiment of the present application provides a handprint collection system. Referring to FIG. 18, the handprint collection system includes a structured light projector 100, a plurality of fourth image acquisition devices 1820 and a processing device (not shown). The handprint collection system described here can collect handprints including, but not limited to, single-finger fingerprints, double-thumb fingerprints, four-finger fingerprints, flat-palm palmprints and side-palm palmprints, and is especially suitable for collecting large-area handprints. Referring also to FIG. 19, the structured light projector 100 can be aimed at the handprint collection area and is used to emit structured light toward the handprint collection area. The dashed rectangular box A in FIG. 19 exemplarily marks the handprint collection area. The handprint expected to be collected can face the structured light projector 100 and the plurality of fourth image acquisition devices 1820. Although the handprint collection area shown in the figures is located above the structured light projector 100 and the plurality of fourth image acquisition devices 1820, in other embodiments not shown, the handprint collection area may also be located in front of the structured light projector 100 and the plurality of fourth image acquisition devices 1820.
Exemplarily, the structured light projected by the structured light projector 100 may be film-projected structured light. The structured light projector 100 may be located on the side closer to the base of the palm. In this way, the structured light projector 100 can be prevented from occupying the central area of the entire collection apparatus, and image acquisition devices can be arranged in the central area, for example the plurality of fourth image acquisition devices 1820, the plurality of fifth image acquisition devices 1830 mentioned below and/or the sixth image acquisition device 1840 mentioned below. The above arrangement of the structured light projector 100 is only an example; it may also be placed at any position in the peripheral area outside the central area or, if space allows, in the central area. When the structured light projector 100 is arranged in the peripheral area, the projection direction of the structured light needs to form a certain angle with the collection surface (the handprint surface) so that the projection range of the structured light can cover the entire handprint collection area.
Since the projection direction of the structured light forms an angle with the collection surface, the farthest and nearest ends of the collection surface would otherwise lie on opposite sides of the projected image plane and the structured light would be out of focus. To avoid this, the pattern generation unit can be placed at an angle to the optical axis of the converging lens rather than perpendicular to it, so that a point on the pattern generation unit that is closer to the converging lens forms a projection point on the collection surface that is farther from the converging lens. In this way, the image plane formed by the pattern on the pattern generation unit after refraction by the converging lens coincides with the collection surface as far as possible, making the projected pattern as clear as possible. Thus, defocusing of the structured light can be avoided, and high-precision three-dimensional reconstruction of the scene can be performed.
Since the structured light projector 100 is placed tilted toward the center of the handprint collection area, the structured light projector 100 can be arranged on the side close to the user, for example the side close to the base of the palm, so that the light emitted by the light source does not shine on the user.
The field of view of the plurality of fourth image acquisition devices 1820 can cover the entire handprint collection area, making it easy to capture images of the entire handprint collection area. After the user places a finger and/or palm in the handprint collection area, the plurality of fourth image acquisition devices 1820 can at least clearly capture the fingerprint and/or palmprint images of the user that face the plurality of fourth image acquisition devices 1820.
Exemplarily, the lenses of the plurality of fourth image acquisition devices 1820 may be located in a predetermined plane, and the plurality of fourth image acquisition devices 1820 may be respectively aimed at a plurality of first sub-regions within the handprint collection area. The plurality of fourth image acquisition devices 1820 may be spaced a certain distance from one another, but any two adjacent first sub-regions overlap or adjoin each other, so that the plurality of fourth image acquisition devices 1820 cooperate to capture images of the entire handprint collection area. The optical axes of the plurality of fourth image acquisition devices 1820 may be arranged parallel to one another, i.e. the plurality of fourth image acquisition devices 1820 capture the images of the plurality of first sub-regions at the same angle. Referring to FIG. 18, there are three fourth image acquisition devices 1820 in the embodiment of the present application, arranged in the predetermined plane at equal distances from one another. It can be understood that, in embodiments not shown, the number of fourth image acquisition devices may be arbitrary and may be set according to the different usage requirements of the handprint collection system and the different parameters of the fourth image acquisition devices. The plurality of fourth image acquisition devices 1820 may be respectively used to capture images of the plurality of first sub-regions to obtain a plurality of first target images. Specifically, the obtained plurality of first target images may respectively correspond to images of different parts of the collected fingerprint and/or palmprint of the user, that is, two adjacent first sub-regions adjoin rather than overlap; alternatively, the plurality of first target images may partially overlap one another. The processing device may process the plurality of first target images to obtain an image of a first type of handprint, which may include at least one of four-finger fingerprints, double-thumb fingerprints, flat-palm palmprints and side-palm palmprints. Specifically, the plurality of first target images may be a subset of the handprint image. The present application provides a plurality of fourth image acquisition devices 1820 that respectively capture images of the plurality of first sub-regions, so that the plurality of first target images can be obtained in a more targeted manner and their clarity is ensured.
The processing device may perform processing such as unfolding and stitching on the plurality of first target images, and the handprint image obtained after processing is a combination of the plurality of first target images. Specifically, in the present application the processing device may process the obtained plurality of first target images and, during processing, stitch together the better-imaged parts of the plurality of first target images to obtain a complete handprint image. In this way, every part of the handprint image obtained after processing has good clarity and higher imaging quality. Moreover, with the cooperation of the structured light, the plurality of fourth image acquisition devices 1820 can acquire the three-dimensional shape data of the captured single finger, double thumbs, four fingers, flat palm or side palm and perform unfolding processing, eliminating the influence of the surface relief of the handprint and obtaining a high-quality planar texture image.
The handprint collection system of the embodiments of the present application may be non-contact, that is, the user can hover part or all of the hand over the handprint collection area for collection without touching the collection apparatus. This avoids the problems brought by contact-based collection, such as high hygiene risk, low collection quality, small collection area, sensitivity to the dryness or wetness of the skin, and low collection consistency. Alternatively, the handprint collection system may, for example, run on a smart device and perform handprint collection with the hand touching the screen; in this case the handprint collection system itself is not in contact with the hand, only the screen is. Of course, the handprint collection system can also be used to collect handprint images without the hand touching the screen.
The handprint collection system provided by the embodiments of the present application uses a plurality of fourth image acquisition devices whose lenses are located in the same plane to respectively capture images of different first sub-regions within the handprint collection area, and a complete handprint image can be obtained by processing these images with the processing device. It is therefore suited to collecting large-area handprints and is especially applicable to collecting images of four-finger fingerprints, double-thumb fingerprints, flat-palm palmprints and side-palm palmprints. Of course, the handprint collection system can also be used to collect images of single-finger fingerprints; in this case the finger can be placed at any position within the handprint collection area, giving the user a better experience. Whether for large-area handprint collection or for collection of a single finger placed rather casually, a complete and high-quality handprint image can be obtained.
Exemplarily, the distance from each of the plurality of fourth image acquisition devices 1820 to the handprint collection area is between 72 mm and 82 mm. This ensures that the handprint collection area lies within the range in which the fourth image acquisition devices 1820 image well, ensuring the clarity of the plurality of first target images and hence the imaging quality of the handprint image. Further, the distance from each of the plurality of fourth image acquisition devices 1820 to the handprint collection area may be 76 mm to achieve a better imaging effect.
Exemplarily, the focal length of each of the plurality of fourth image acquisition devices 1820 may be between 4 mm and 4.2 mm. The focal length can be understood as the distance from the focal point of the lens of the fourth image acquisition device 1820 to its rear principal plane, and it affects the proportion of the captured target in the captured picture. Setting the focal length in this way further ensures that the handprint collection area lies within the imaging range of the fourth image acquisition devices 1820 and gives the handprint a more moderate proportion of the first target image.
Exemplarily, the f-number of each of the plurality of fourth image acquisition devices 1820 is not less than 4. The f-number (also called the F-stop) is the ratio of the focal length of a camera lens to the diameter of its entrance pupil. A large f-number corresponds to a small aperture, which increases the depth of field and therefore the space in which the fingers may move, while still preserving imaging sharpness. In the present application, the f-number of each of the plurality of fourth image acquisition devices 1820 is not less than 4, which guarantees the size of the space in which the captured finger or palm may move; within this range the handprint collection system can capture clear handprint images, improving the user experience. Further, the f-number of each of the plurality of fourth image acquisition devices 1820 may be not less than 5.6.
Exemplarily, the depth of field of each of the plurality of fourth image acquisition devices 1820 is not less than 20 mm. This better accommodates the thickness of the finger or palm and the height differences of the surface relief, ensuring imaging quality.
Exemplarily, the optical axes of the plurality of fourth image acquisition devices 1820 may be located in a first plane perpendicular to the predetermined plane. That is, the lenses of the plurality of fourth image acquisition devices 1820 are parallel to one another and arranged in sequence along the same straight line. The structural layout is more compact and easier to install, and blocking of the fields of view between the plurality of fourth image acquisition devices 1820 can be effectively prevented.
Exemplarily, the field of view of each of the plurality of fourth image acquisition devices 1820 may be greater than or equal to 125 × 70 mm, and the plurality of fourth image acquisition devices 1820 may adjoin or overlap along the long side corresponding to 125 mm. Specifically, taking the three fourth image acquisition devices 1820 shown in the figures as an example, the long sides of their fields of view can adjoin one another, forming a combined field of view of at most 125 × 210 mm to suit most application scenarios. Such a field of view ensures that the combined viewing angle of the plurality of fourth image acquisition devices 1820 can cover a sufficiently large handprint collection area and ensures the completeness of the handprint image.
Exemplarily, the diagonal field of view of each of the plurality of fourth image acquisition devices 1820 may be between 80° and 82°. In this way, once the plurality of fourth image acquisition devices 1820 are reasonably arranged, the first sub-regions can be covered more comprehensively, preventing dead-angle areas that cannot be captured and further ensuring the completeness of the collected handprint image.
Exemplarily, the working wavelength of each of the plurality of fourth image acquisition devices 1820 may correspond to the wavelengths of visible light. In this way, the fourth image acquisition devices 1820 can directly capture visible-light information from the handprint collection area, and the handprint collection system has a wider range of application. Exemplarily, each of the plurality of fourth image acquisition devices 1820 may contain no infrared filter; a relatively clear handprint image can be obtained without one, making the structure simpler and more reasonable.
Exemplarily, the resolution of each of the plurality of fourth image acquisition devices 1820 may be not less than 5 megapixels. Such a resolution ensures the clarity of each of the plurality of first target images and hence the overall clarity of the handprint image.
Exemplarily, the optical distortion of each of the plurality of fourth image acquisition devices 1820 may be less than 5%. Optical distortion is the degree to which the image of an object formed by the optical system deviates from the object itself. In the present application, the optical distortion of each of the plurality of fourth image acquisition devices 1820 is less than 5%, which better ensures the fidelity of each first target image and hence of the handprint image.
Exemplarily, the lens mount of the fourth image acquisition devices 1820 may be an M12 mount, which gives a smaller lens size and a more compact structure.
Exemplarily, referring to FIG. 18, the handprint collection system may further include a plurality of fifth image acquisition devices 1830. The lenses of at least some of the plurality of fifth image acquisition devices 1830 are located in different planes and are aimed at a plurality of second sub-regions within the handprint collection area. In this way, the fifth image acquisition devices 1830 can better capture handprint information lying in different planes, thereby obtaining a full single-finger fingerprint image. A full single-finger fingerprint includes the fingerprints on the front of the finger pad and on its two sides. The plurality of fifth image acquisition devices 1830 are respectively used to capture images of the plurality of second sub-regions to obtain a plurality of second target images, and the processing device is further used to process the plurality of second target images to obtain an image of a second type of handprint, which may include the full single-finger fingerprint. The plurality of second sub-regions may lie in mutually different planes. Further, the plurality of second sub-regions may be spatially separated from the plurality of first sub-regions, or may partially overlap them. The plurality of fifth image acquisition devices 1830 may be used to capture 3D images of the fingerprint, and the plurality of second sub-regions may correspond respectively to the front and side regions of the finger. The processing device may stitch the fingerprint images of the plurality of second sub-regions onto the 3D fingerprint surface of a 3D fingerprint model to obtain a 3D fingerprint image. With the cooperation of the fifth image acquisition devices 1830 and the structured light projector, the obtained fingerprint area is far larger than that traditionally obtained by contact, and the collection quality is higher. When a high-quality, full single-finger fingerprint is needed, the finger only needs to hover in the image collection area for a moment to obtain a high-definition, complete single-finger fingerprint image; there is no need to roll the finger from one side to the other on a collection device to increase the total contact area. This solution is particularly suitable for non-compliant individuals, and blurred fingerprint images do not occur.
The plurality of fifth image acquisition devices 1830 may be, or may include, the above-described first image acquisition device, second image acquisition device and third image acquisition device.
The handprint collection system includes the plurality of fourth image acquisition devices 1820 and the plurality of fifth image acquisition devices 1830. If high-quality single-finger fingerprints need to be collected, the plurality of fifth image acquisition devices 1830 can be used; if large-area handprints need to be collected, or the requirements on the side fingerprints of a single finger are not high, the plurality of fourth image acquisition devices 1820 can be used. The plurality of fourth image acquisition devices 1820 and the plurality of fifth image acquisition devices 1830 may not work at the same time, and the structured light projector 100 can project structured light both when the plurality of fourth image acquisition devices 1820 are working and when the plurality of fifth image acquisition devices 1830 are working. This satisfies a variety of collection needs, such as single finger, four fingers, double thumbs, flat palm and side palm, giving the handprint collection system a wider range of application.
Exemplarily, referring to FIG. 20, the plurality of fifth image acquisition devices 1830 may include a middle fifth image acquisition device 1833, a left fifth image acquisition device 1831 and a right fifth image acquisition device 1832. The left fifth image acquisition device 1831 is located to the left of the middle fifth image acquisition device 1833, and the right fifth image acquisition device 1832 is located to its right. The optical axes of the lenses of the left fifth image acquisition device 1831 and the right fifth image acquisition device 1832 are tilted at a set angle, each facing its corresponding second sub-region. Exemplarily, the left fifth image acquisition device 1831 and the right fifth image acquisition device 1832 may be arranged symmetrically on the two sides of the middle fifth image acquisition device 1833. When the fifth image acquisition devices 1830 are working, they can capture images of the middle, left and right regions of the target respectively. This arrangement is particularly suitable for collecting a single finger: the second sub-regions can include the fingerprints on the front, left side and right side of the finger pad, and a clearer and more accurate 3D fingerprint image of the single finger can be obtained.
Exemplarily, the lens of the middle fifth image acquisition device 1833 may be located in the predetermined plane. In this way, the middle fifth image acquisition device 1833 and the plurality of fourth image acquisition devices 1820 are arranged in the same plane, the structure is more compact, and blocking of the fields of view between the fifth image acquisition device 1833 and the plurality of fourth image acquisition devices 1820 is prevented.
Of course, in other embodiments not shown, two or a different number of fifth image acquisition devices 1830 may also be provided. When only two fifth image acquisition devices 1830 are provided, they may be arranged at the lower left and lower right of the single finger. When more fifth image acquisition devices 1830 are provided, the arrangement is more flexible.
Exemplarily, the distance from each of the plurality of fifth image acquisition devices 1830 to the handprint collection area is between 72 mm and 82 mm. This ensures that the handprint collection area lies within the range in which the fifth image acquisition devices 1830 image well, ensuring the clarity of the plurality of second target images. Further, the distance from each of the plurality of fifth image acquisition devices 1830 to the handprint collection area may be 76 mm to achieve a better imaging effect.
Exemplarily, the focal length of each of the plurality of fifth image acquisition devices 1830 may be between 5 mm and 7 mm. This further ensures that the handprint collection area lies within the better imaging range of the fifth image acquisition devices 1830, ensuring the imaging effect of the handprint collection system, and gives the handprint a more moderate proportion of the second target image. Further, the focal length of each of the fifth image acquisition devices 1830 may be 6 mm.
Exemplarily, the field of view of each of the plurality of fifth image acquisition devices 1830 is greater than or equal to 40 × 40 mm. This ensures that the fields of view of the plurality of fifth image acquisition devices 1830 can cover a sufficiently large handprint collection area.
Exemplarily, the diagonal field of view of each of the plurality of fifth image acquisition devices 1830 is between 61° and 63°. In this way, once the plurality of fifth image acquisition devices 1830 are arranged, they can cover the plurality of second sub-regions more comprehensively, preventing dead-angle areas that cannot be captured. Preferably, the diagonal field of view of each of the fifth image acquisition devices 1830 may be 62°.
Exemplarily, the depth of field of the plurality of fifth image acquisition devices 1830 is not less than 20 mm. This better accommodates the thickness of the finger or palm and the height differences of the surface relief, ensuring imaging quality.
Exemplarily, the field of view of each of the plurality of fifth image acquisition devices 1830 may be smaller than that of each of the fourth image acquisition devices 1820. In this way, the fifth image acquisition devices 1830 capture the smaller second sub-regions and the fourth image acquisition devices 1820 capture the relatively larger first sub-regions, enabling finer, more targeted image collection of the handprint collection area with higher collection quality.
Exemplarily, the focal length of each of the plurality of fifth image acquisition devices 1830 may be greater than that of each of the plurality of fourth image acquisition devices 1820. The subject (i.e., the handprint) therefore occupies a larger proportion of the second target image captured by the fifth image acquisition device 1830 than of the first target image captured by the fourth image acquisition device 1820. The fifth image acquisition devices 1830 can thus better capture smaller subjects such as the fingerprint of a single finger, while the fourth image acquisition devices 1820 can capture relatively larger subjects such as the palm print of a palm.
Exemplarily, the diagonal field of view of each of the plurality of fifth image acquisition devices 1830 may be smaller than that of each of the fourth image acquisition devices 1820; that is, the fifth image acquisition devices 1830 have a smaller field of view than the fourth image acquisition devices 1820. The fifth image acquisition devices 1830 therefore capture smaller targets, which reduces interference and ensures the collection quality of full single-finger fingerprint images, while the fourth image acquisition devices 1820 can be used to capture larger targets, where the larger field of view ensures the completeness of the handprint image.
Exemplarily, the optical axes of the plurality of fourth image acquisition devices 1820 lie in a first plane perpendicular to the predetermined plane, the optical axes of the plurality of fifth image acquisition devices 1830 lie in a second plane perpendicular to the predetermined plane, and the first plane is parallel to the second plane. In this way, the center of the first sub-area captured by the fourth image acquisition devices is spaced a certain distance from the center of the second sub-area captured by the fifth image acquisition devices. That is, the handprint collection area can be divided into a first sub-area and a second sub-area, which may be set independently of each other or partially overlap. More targeted handprint collection can be performed within each sub-area, further ensuring the quality of image collection within its range.
Exemplarily, referring to FIGS. 18-20, the handprint collection system may further include a sixth image acquisition device 1840. The field of view of the sixth image acquisition device 1840 covers the entire handprint collection area, so an image of the handprint collection area can be obtained in real time through the sixth image acquisition device 1840. The handprint collection system may be connected to a display device, or may itself include one. The images of the handprint collection area captured by the sixth image acquisition device 1840 can be displayed in real time on the display device as a preview, guiding the user to place a single finger, four fingers, two thumbs, a flat palm, or the side of the palm in the appropriate area and improving the user experience.
Exemplarily, the lens of the sixth image acquisition device 1840 may be located in the predetermined plane. In this way, the sixth image acquisition device 1840 lies in the same plane as the plurality of fourth image acquisition devices 1820, giving a compact structure and preventing the sixth image acquisition device 1840 and the fourth image acquisition devices 1820 from blocking each other's fields of view.
Exemplarily, the optical axis of the sixth image acquisition device 1840 forms an angle with the predetermined plane so that it is aimed at the center of the handprint collection area. It can be understood that, in practical applications, because of the limited layout space inside the handprint collection system, angling the optical axis of the sixth image acquisition device 1840 relative to the predetermined plane allows it to be placed farther from the handprint collection area than the fourth image acquisition devices 1820 and the fifth image acquisition devices 1830. This leaves sufficient space inside the handprint collection system for installing the multiple fourth image acquisition devices 1820 and the multiple fifth image acquisition devices 1830; it is only necessary to ensure that the center of the field of view of the sixth image acquisition device 1840 is aligned with the center of the handprint collection area.
Exemplarily, referring to FIGS. 18 and 19 in combination, the handprint collection system may include an illumination device 1850. The illumination device 1850 may include one or more light sources, and may be the aforementioned illumination system or a part of it. The first plane and the second plane are located between the illumination device 1850 and the structured light projector 100; that is, the illumination device 1850 and the structured light projector 100 occupy the edge regions on the two sides of the handprint collection system, while the multiple fourth image acquisition devices 1820 and the multiple fifth image acquisition devices 1830 occupy its central region.
Exemplarily, referring to FIGS. 18 and 20, the handprint collection system may further include a housing 1860. The plurality of fourth image acquisition devices 1820 may be arranged inside the housing, which better keeps their positions stable, prevents accidental touching, and ensures the clarity of the captured first target images. Exemplarily, the plurality of fifth image acquisition devices 1830 and the sixth image acquisition device 1840 may also be arranged inside the housing 1860. The structured light projector 100 may be arranged outside the housing 1860, as long as it can project structured light onto the handprint collection area. Exemplarily, the handprint collection system may include a support base 1870. The housing 1860 is connected to the support base 1870, and the structured light projector 100 may be mounted on the support base 1870; the support base 1870 thus keeps the relative position between the housing 1860 and the structured light projector 100 stable. Exemplarily, a light-transmitting plate 270 may be provided on the housing 1860 at the position corresponding to the handprint collection area to serve as a dust shield. Exemplarily, the light-transmitting plate 270 may be installed at a set height inside the housing 1860 so that the virtual images formed by light of the illumination device 1850 passing through the light-transmitting plate 270 disturb the image collection process as little as possible. Because the lenses of the multiple fourth image acquisition devices 1820, the multiple fifth image acquisition devices 1830, and the sixth image acquisition device 1840 are arranged in the preset plane, the virtual images they form on the light-transmitting plate 270 are more concentrated, which makes it easier to choose a position for the light-transmitting plate 270 that avoids interference from those virtual images. Moreover, since these image acquisition devices are installed in a more concentrated arrangement, installation of the light-transmitting plate 270 is more convenient. Exemplarily, the light-transmitting plate 270 may be made of glass.
Exemplarily, the handprint collection system may also be provided with a top cover, which prevents the light sources of the illumination device 1850 from dazzling the user's eyes.
According to another aspect of the present application, a handprint collection system is provided. The handprint collection system of this embodiment can be used for the simultaneous collection of four-finger fingerprints, the simultaneous collection of two-thumb fingerprints, or the collection of palm prints. Compared with single-finger collection, when multiple fingers or a palm are collected, different parts of the fingers or palm are more likely to be tilted (in pitch, roll, or yaw), so that some regions fall outside the range in which the image acquisition device can image clearly and the collected image is blurred.
The structure of the handprint collection system 200 of some embodiments is described below with reference to FIGS. 22 to 29. As shown in FIGS. 22 to 24, the handprint collection system 200 has a handprint collection area 300. The size of the handprint collection area 300 usually needs to fit the four fingers of a palm; exemplarily, its length may be 110 mm-130 mm and its width 105 mm-125 mm. As shown in FIG. 22 and FIGS. 24 to 29, the handprint collection system 200 may include a plurality of seventh image acquisition devices 2220. It can be understood that there may be two, three, four, or more seventh image acquisition devices 2220, which is not limited here. The seventh image acquisition devices 2220 may be industrial cameras, camera modules, or other types of image acquisition devices, which is likewise not limited here. In the embodiment shown in the figures, the plurality of image acquisition devices includes two seventh image acquisition devices. Regardless of their number, the lenses 121 of the plurality of seventh image acquisition devices 2220 may be arranged around a centerline and face the handprint collection area 300. It should be noted that the centerline mentioned here and below refers to the centerline of the plurality of seventh image acquisition devices 2220; as shown in FIGS. 24, 27, and 29, the centerline is the straight line MN. That is, the lenses 121 of the plurality of seventh image acquisition devices 2220 surround the centerline MN so that their fields of view substantially coincide. The centerline MN may be substantially parallel to the line connecting the center of the handprint collection area 300 and the center of the assembly formed by the plurality of seventh image acquisition devices 2220. The lenses 121 of the plurality of seventh image acquisition devices 2220 face the handprint collection area 300, so that after the user places a hand in the handprint collection area 300, the lenses 121 can readily capture the handprint of the hand in the area.
The plurality of seventh image acquisition devices 2220 each have a clearly imageable object-plane subspace within the depth-of-field range in front of and behind their respective best object planes. That is, each seventh image acquisition device 2220 has its own corresponding clearly imageable object-plane subspace. The best object plane of a seventh image acquisition device 2220 usually refers to the object plane conjugate to its image plane. Typically, when the focal length of the lens of an image acquisition device is fixed, its image distance is essentially constant, so the image plane is determined and the best object plane can be determined accordingly. Clear imaging remains possible within the depth-of-field range in front of and behind the best object plane, and the space within that range is the clearly imageable object-plane subspace. The clearly imageable object-plane subspaces of the plurality of seventh image acquisition devices 2220 partially overlap, and together they can form a clearly imageable total space. The illustrated embodiment includes two seventh image acquisition devices 2220 whose clearly imageable object-plane subspaces intersect roughly along the extension direction of the centerline, so the clearly imageable total space they form is larger than either subspace alone. Of course, when there are more seventh image acquisition devices 2220, any two adjacent clearly imageable object-plane subspaces along the extension direction of the centerline intersect, and the union of any two adjacent subspaces is larger than either of them, so the clearly imageable total space of these seventh image acquisition devices is larger than any single subspace.
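As a way to make the geometry above concrete, the overlap of the clearly imageable object-plane subspaces can be checked along the centerline MN by treating each subspace as a depth interval around the device's best object plane. The sketch below is a minimal illustration of that interval reasoning; the numeric distances are placeholders, not values taken from this application.

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # (near, far) distance along centerline MN, in mm

def adjacent_pairs_overlap(subspaces: List[Interval]) -> bool:
    """Check that adjacent clearly imageable subspaces intersect along the centerline,
    so their union forms one continuous clearly imageable total space."""
    ordered = sorted(subspaces)
    return all(a_far >= b_near for (_, a_far), (b_near, _) in zip(ordered, ordered[1:]))

def total_space(subspaces: List[Interval]) -> Interval:
    """Extent of the clearly imageable total space when the subspaces form a chain."""
    ordered = sorted(subspaces)
    return ordered[0][0], max(far for _, far in ordered)

# Placeholder example: two devices whose best object planes sit at different depths.
cam_a = (60.0, 85.0)   # clearly imageable depth range of the first device, mm
cam_b = (80.0, 105.0)  # clearly imageable depth range of the second device, mm

assert adjacent_pairs_overlap([cam_a, cam_b])
print(total_space([cam_a, cam_b]))  # (60.0, 105.0): larger than either subspace alone
```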
The handprint collection area 300 may contain the clearly imageable total space. When the user places a hand anywhere in the clearly imageable total space of the handprint collection area 300, the handprint captured through the lenses 121 of the plurality of seventh image acquisition devices 2220 can be imaged clearly without any contact. In some preferred embodiments, the handprint collection area 300 may coincide with the clearly imageable total space, so that wherever the user places a hand in the handprint collection area 300, there is an image acquisition device that can capture a clear handprint image. Of course, the handprint collection area 300 may also be slightly larger than the clearly imageable total space; typically, in contactless collection, the user will not place a hand near the edge of the handprint collection area 300, to avoid touching the housing or other parts that enclose it. With this factor taken into account, the handprint collection area 300 may also be slightly larger than the clearly imageable total space.
Because the best object planes of the plurality of seventh image acquisition devices 2220 are located at different positions within the handprint collection area 300, and their clearly imageable object-plane subspaces partially overlap, it is guaranteed that at any position within the clearly imageable total space, at least one seventh image acquisition device 2220 can capture a clear image of the handprint.
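One way to read this guarantee in software terms is to pick, for a measured hand depth along the centerline, any device whose clearly imageable subspace contains that depth. The following sketch assumes that a depth estimate (for example from the structured light projector) is available; the interface and the values are hypothetical.

```python
from typing import Dict, Optional, Tuple

# Hypothetical table of clearly imageable depth ranges (mm along centerline MN) per device.
SUBSPACES: Dict[str, Tuple[float, float]] = {
    "seventh_cam_1": (60.0, 85.0),
    "seventh_cam_2": (80.0, 105.0),
}

def pick_clear_device(hand_depth_mm: float) -> Optional[str]:
    """Return a device whose clearly imageable subspace contains the measured hand depth.
    If the subspaces cover the whole handprint collection area, this never returns None
    for a hand placed inside that area."""
    for name, (near, far) in SUBSPACES.items():
        if near <= hand_depth_mm <= far:
            return name
    return None

print(pick_clear_device(82.0))  # inside the overlap, so either device would do
```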
The handprint collection system 200 of this embodiment can collect handprints without contact. Because the clearly imageable object-plane subspaces of the plurality of seventh image acquisition devices 2220 partially overlap and their best object planes are located at different positions within the handprint collection area 300, the clearly imageable total space of the plurality of seventh image acquisition devices 2220 is enlarged as much as possible, giving the handprint collection area 300 a wide collection range. As long as the user places a hand in the handprint collection area 300, a clear image can be formed, which improves the clarity of the collected handprint, avoids the handprint image being unusable in subsequent applications such as identity recognition, and reduces the degree of cooperation required from the user.
Exemplarily, as shown in FIGS. 22 to 24, the handprint collection system 200 may further include a housing 1860. The shape of the housing 1860 may vary and is not limited here; for example, it may be substantially a cuboid. The plurality of seventh image acquisition devices 2220 may be located inside the housing 1860. The handprint collection system 200 may also include a cover 180, which may be located above or below the housing 1860, and the handprint collection area 300 may be located inside the cover 180. The cover 180 and the housing 1860 may be formed integrally or manufactured separately and then joined, which is not limited here. The surface of the housing 1860 facing the handprint collection area 300 is light-transmitting. In the illustrated embodiment, the handprint collection area 300 is defined by the enclosure formed by the top plate 171 of the housing 1860 and the cover 180, so the top plate 171 forms the surface facing the handprint collection area 300. A light-transmitting opening 172 may be provided in the top plate 171 of the housing 1860 so that the top plate 171 transmits light through the opening 172. Of course, part or all of the top plate 171 may instead be formed of a transparent material such as glass to transmit light, or the top plate 171 may be removed altogether. With this arrangement, the housing 1860 protects the plurality of seventh image acquisition devices 2220, and the cover 180 effectively forms a dedicated region for the handprint collection area 300 that is easy for the user to recognize and operate. In addition, the housing 1860 and the cover 180 make the handprint collection system 200 look more attractive as a whole and improve the user experience. When supplementary light is used, the cover 180 prevents the supplementary light from dazzling the user, giving a better experience.
Exemplarily, in some optional embodiments, as shown in FIGS. 22 to 23, the handprint collection system 200 may further include a status indicator light 191. The status indicator light 191 may be a multi-color indicator bar whose colors include, but are not limited to, red, green, and blue. In this way, the color changes of the status indicator light 191 inform the user of the state of the handprint collection system 200.
Exemplarily, in some optional embodiments, as shown in FIGS. 22 to 23, the handprint collection system 200 may further include a collection guide light 192. The collection guide light 192 may be located around the handprint collection area 300 to show the user the extent of the area and to prompt the user to place four fingers or two thumbs in it. The collection guide light 192 thus guides the user to place the fingers to be collected in the handprint collection area 300 more quickly, improving the collection efficiency of the handprint collection system 200.
Exemplarily, in some optional embodiments, as shown in FIGS. 22 to 23, the handprint collection system 200 may further include a liquid crystal display 193. The liquid crystal display 193 can be used to interact with the user and to give prompts, such as playing a collection animation, displaying a preview image in real time, guiding posture adjustment in real time, and displaying collection results, making collection with the handprint collection system 200 more intuitive for the user.
Exemplarily, the focal lengths of the lenses 121 of the plurality of seventh image acquisition devices 2220 may differ. With different focal lengths, even if the seventh image acquisition devices 2220 are at the same distance from the handprint collection area 300, their best object planes can be located at different positions within the area; in other words, differing focal lengths make it possible to install the plurality of seventh image acquisition devices 2220 at the same position. Exemplarily, when there are two seventh image acquisition devices, in some optional embodiments one may have a focal length of 8 mm and the other a focal length of 6 mm. This ensures that, at positions in the handprint collection area at different distances from the front of the lenses, the fields of view of the two cameras differ little, so the resolution (DPI) of the handprint collection system 200 does not fall off rapidly and monotonically as the distance between a given position and the camera changes. Giving the lenses 121 of the plurality of seventh image acquisition devices 2220 different focal lengths is a simple and easy way to place their best object planes at different positions within the handprint collection area 300. Moreover, as mentioned later, it also makes it easier to arrange the supplementary lights.
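The claim that mixing an 8 mm and a 6 mm lens keeps the resolution more uniform across the capture depth can be illustrated with a small DPI calculation: object-plane resolution scales roughly with focal length divided by object distance. The sensor format, pixel count, and depth split in the sketch below are assumptions chosen only for illustration.

```python
def dpi(focal_mm: float, distance_mm: float,
        sensor_w_mm: float = 5.76, pixels_w: int = 2592) -> float:
    """Approximate object-plane resolution in dots per inch: pixel count across the
    object-plane field of view, whose width is roughly sensor_w * distance / focal."""
    fov_w_mm = sensor_w_mm * distance_mm / focal_mm
    return 25.4 * pixels_w / fov_w_mm

# Assumed split of the capture depth between the two lenses (mm along the centerline):
# the 6 mm lens serves the nearer part, the 8 mm lens the farther part.
near_range = [60, 70, 80]
far_range = [80, 90, 105]

combined = [dpi(6.0, d) for d in near_range] + [dpi(8.0, d) for d in far_range]
single = [dpi(6.0, d) for d in near_range + far_range]

print(f"two lenses : {min(combined):.0f}-{max(combined):.0f} dpi")  # narrower band
print(f"6 mm only  : {min(single):.0f}-{max(single):.0f} dpi")      # drops off faster with depth
```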
Exemplarily, the distances from the front ends of the lenses 121 of the plurality of seventh image acquisition devices 2220 to the handprint collection area 300 may differ. The positional term "front end" here and below refers to the end close to the handprint collection area 300. In the illustrated embodiment, the handprint collection area 300 is defined by the enclosure of the top plate 171 of the housing 1860 and the cover 180, so the distance from the front end of a lens 121 to the handprint collection area 300 can be understood as the distance from the handprint collection area 300 to the top plate 171. In the illustrated embodiment, the lenses 121 face upward to capture the handprint; in other embodiments not shown, the lenses 121 may face downward or sideways, in which case the position of the handprint collection area 300 relative to the lenses 121 needs to be adjusted accordingly. In general, the distance from the front end of a lens 121 to the handprint collection area 300 can be understood as the distance from the front end of the lens 121 to the partition between the seventh image acquisition devices 2220 and the handprint collection area 300. By making these distances different for different seventh image acquisition devices 2220, their best object planes are located at different positions within the handprint collection area 300 even when their focal lengths, image distances, depths of field, and other parameters are essentially the same, so that their clearly imageable object-plane subspaces can be stitched together. The same model of seventh image acquisition device 2220 can therefore be used, without customizing different image acquisition devices, which reduces manufacturing and storage costs. Moreover, since all image acquisition devices are of the same type, assembly errors caused by choosing the wrong type of image acquisition device are also avoided.
Exemplarily, a plurality of image acquisition devices with different focal lengths and different distances from the lens front end to the handprint collection area may be selected, which maximizes the combined clearly imageable total space.
Exemplarily, the optical axes of the lenses 121 of the plurality of seventh image acquisition devices 2220 may be tilted toward the centerline MN; the optical axes are shown as OP and ST in FIG. 29. In some optional embodiments, the angle between the optical axes OP and ST and the centerline MN may be less than or equal to 10 degrees. Tilting the optical axes of the lenses 121 toward the centerline MN allows the centers of the fields of view of the plurality of seventh image acquisition devices 2220 to be roughly aligned with the center of the handprint collection area 300. Since users tend to place the hand in the central region of the handprint collection area 300, tilting all lenses 121 toward the centerline MN ensures as far as possible that the fields of view of the image acquisition devices roughly coincide and that the hand lies at the center of the field of view of each seventh image acquisition device 2220, which in turn ensures imaging quality. Of course, the present application does not exclude the case where the optical axes OP and ST are both parallel to the centerline MN.
Exemplarily, the distance from the center of the field of view on the best object plane of each of the plurality of seventh image acquisition devices 2220 to the centerline MN is less than or equal to a first preset threshold. Usually, when an existing image acquisition device captures an image, the image in the edge regions far from the center of its field of view is distorted; to ensure imaging quality, each seventh image acquisition device 2220 is therefore expected to face the hand directly so that the hand lies at the center of its field of view. By keeping the distance from the field-of-view center on the best object plane to the centerline MN at or below the first preset threshold, it is ensured as far as possible that when a hand is placed in the handprint collection area 300 for capture, each image acquisition device photographs roughly the same part of the hand, and the hand lies in the central region rather than the edge region of the field of view of each seventh image acquisition device 2220. This prevents distortion of the handprint images collected by the plurality of seventh image acquisition devices 2220 and greatly improves their fidelity. Exemplarily, the first preset threshold is 0-10 mm; preferably, it may be 0-5 mm; more preferably, 0-2 mm.
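The first preset threshold can be related to the camera layout with simple trigonometry: on the best object plane, the field-of-view center sits at the lens's lateral offset from the centerline MN minus the working distance times the tangent of the tilt angle. The sketch below only illustrates this relation; the offset, distance, and tilt angle are assumed values, not dimensions from this application.

```python
import math

def fov_center_offset_mm(lateral_offset_mm: float, tilt_deg: float,
                         best_object_dist_mm: float) -> float:
    """Lateral distance from the field-of-view center on the best object plane to the
    centerline MN, for a lens tilted toward the centerline by tilt_deg."""
    return abs(lateral_offset_mm - best_object_dist_mm * math.tan(math.radians(tilt_deg)))

# Assumed layout: lens 12 mm to one side of MN, tilted 8 degrees toward it, best plane at 85 mm.
offset = fov_center_offset_mm(lateral_offset_mm=12.0, tilt_deg=8.0, best_object_dist_mm=85.0)
print(f"offset = {offset:.1f} mm")  # about 0.1 mm for these assumed numbers
assert offset <= 2.0, "within the tightest first preset threshold (0-2 mm)"
```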
Exemplarily, as shown in FIG. 22, FIGS. 25 to 27, and FIG. 28, the handprint collection system 200 may further include a first supplementary light 2230. The first supplementary light 2230 may be a single unit or a split structure, which is not limited here. It may be located around the plurality of seventh image acquisition devices 2220, and its light-emitting surface may face the handprint collection area 300. The first supplementary light 2230 may be roughly circular, square, rhombic, elliptical, or of another shape, which is not specifically limited here, and may be controlled automatically, manually, or in any other suitable way, which is not described here. The first supplementary light 2230 provides auxiliary light for the environment when ambient light is lacking, and may also support adjustment of brightness, color temperature, and the like. It mainly supplements light for the underside of handprints such as four fingers or two thumbs. With this arrangement, when light is insufficient, the first supplementary light 2230 provides auxiliary light for the handprint collection area 300 and the surroundings of the plurality of seventh image acquisition devices 2220, giving the lenses 121 a better shooting environment, making it easier to collect clear handprint images in any environment, and reducing the noise in the collected images caused by insufficient light. This improves the collection effect of the handprint collection system 200, broadens the scenarios in which it can be used, and enhances its versatility.
Exemplarily, as shown in FIGS. 23 and 24, the handprint collection system 200 may further include a light-transmitting plate 140. The light-transmitting plate 140 may be roughly square, circular, elliptical, or of another shape, which is not limited here. The plurality of seventh image acquisition devices 2220 and the handprint collection area 300 may be located on opposite sides of the light-transmitting plate 140. As noted above, the plurality of seventh image acquisition devices 2220 includes the lenses 121, which usually require careful maintenance. With the plurality of seventh image acquisition devices 2220 and the handprint collection area 300 located on opposite sides of the light-transmitting plate 140, the seventh image acquisition devices 2220 can capture images of the handprint collection area 300 through the plate. Moreover, the light-transmitting plate 140 separates the seventh image acquisition devices 2220 from the handprint collection area 300, effectively forming two separate spaces, so that when the user places a hand in the handprint collection area 300, the lenses 121 are not touched and are protected from shaking or damage caused by rubbing or collision with the hand. This improves the stability of the plurality of seventh image acquisition devices 2220 to a certain extent and extends the service life of the lenses 121.
Exemplarily, when the distances from the front ends of the lenses 121 of the plurality of seventh image acquisition devices 2220 to the handprint collection area 300 are the same, the centers of the front ends of the lenses 121 lie in a lens plane perpendicular to the centerline MN. In some of the foregoing embodiments, the optical axes of the lenses 121 of the plurality of seventh image acquisition devices 2220 may be tilted toward the centerline MN, so the lenses 121 are also tilted relative to one another and their individual planes are not necessarily coplanar; for ease of description, the plane that contains the centers of the front ends of the lenses 121 of the plurality of seventh image acquisition devices 2220 and is perpendicular to the centerline MN is defined as the lens plane. The distance from the light-emitting surface of the first supplementary light 2230 to the lens plane is less than or equal to a second preset threshold. It should be noted that when this distance exceeds the second preset threshold, the effectiveness of the first supplementary light 2230 in assisting the lenses 121 of the plurality of seventh image acquisition devices 2220 under insufficient light is significantly reduced. In some optional embodiments, the light-emitting surface of the first supplementary light 2230 may be coplanar with the lens plane. With this arrangement, the first supplementary light 2230 provides a better fill-light effect for the lenses 121 of the plurality of seventh image acquisition devices 2220 and reduces lamp shadows.
Exemplarily, as shown in FIGS. 22, 25, and 26, the first supplementary light 2230 may form a ring surrounding the plurality of seventh image acquisition devices 2220. In this way, the first supplementary light 2230 provides fill light roughly all around the plurality of seventh image acquisition devices 2220. Typically, the user's hand is placed in the central region of the handprint collection area 300 for capture, so a ring-shaped first supplementary light 2230 surrounding the plurality of seventh image acquisition devices 2220 essentially surrounds the hand and provides uniform fill light from all directions, avoiding light-dark contrast in the handprint image and improving imaging quality.
Exemplarily, as shown in FIGS. 22, 25, and 26, the first supplementary light 2230 may include a first C-shaped light board 131 and a second C-shaped light board 132, whose openings face each other and together enclose a ring. It can be understood that the first C-shaped light board 131 and the second C-shaped light board 132 may be identical semicircles, or one may be a major arc and the other a minor arc, which is not limited here. Preferably, the inner diameter of the ring may be about 80-90 mm and the outer diameter about 100-110 mm. Forming the ring-shaped first supplementary light 2230 from two C-shaped light boards makes replacement simpler: if one C-shaped light board fails, only that board needs to be replaced rather than the entire first supplementary light 2230, which reduces maintenance costs. In addition, the C-shaped light boards have a smoother shape with almost no sharp corners, which also helps protect the plurality of seventh image acquisition devices 2220.
Exemplarily, as shown in FIGS. 22, 25, and 27, the ring may have a notch 133. The notch 133 may pass through the ring or be a recess connected to it, which is not limited here. As shown in FIG. 22 and FIGS. 24 to 29, the handprint collection system 200 may further include a structured light projector 100. The structured light projector 100 may be any structured light projector known in the prior art or developed in the future, which is not limited here. The projection end of the structured light projector 100 may be arranged at the notch 133, so that the light emitted from the projection end passes out through the notch 133; that is, the light emitted from the projection end of the structured light projector 100 passes through the ring around the plurality of seventh image acquisition devices 2220, and is therefore also located around them. Exemplarily, as shown in FIG. 25, the handprint collection system 200 may further include a base 194 on which the structured light projector 100 and the plurality of seventh image acquisition devices 2220 are both supported; the base 194 thus supports and fixes the structured light projector 100 and the plurality of seventh image acquisition devices 2220. With this arrangement, the structured light projector 100 can cooperate with the plurality of seventh image acquisition devices 2220 to form three-dimensional topography data of the fingers, making the handprint information collected by the handprint collection system 200 more three-dimensional and vivid. In addition, the structured light projector 100 can cooperate with the plurality of seventh image acquisition devices 2220 to detect finger posture in real time and guide the user to adjust it.
Exemplarily, along the lateral direction perpendicular to the centerline MN, the distance between the first supplementary light 2230 and the plurality of seventh image acquisition devices 2220 is less than or equal to a third preset threshold. Exemplarily, the third preset threshold may be 0-10 mm; preferably, 0-5 mm; more preferably, 0-2 mm. When the distance between the first supplementary light 2230 and the plurality of seventh image acquisition devices 2220 is large, the light emitted by the first supplementary light 2230 may not strike the user's hand perpendicularly; the light may then be distorted by refraction when passing through the light-transmitting plate 270 between the plurality of seventh image acquisition devices 2220 and the handprint collection area 300, or the imaging may be affected by light reflected from the light-transmitting plate 270.
Exemplarily, as shown in FIGS. 22 to 29, the handprint collection system 200 may further include a second supplementary light 2260. The second supplementary light 2260 may be roughly strip-shaped, circular, square, rhombic, elliptical, or of another shape, which is not specifically limited here. As shown in FIGS. 24 and 26, the projection of the second supplementary light 2260 in a reference plane perpendicular to the centerline MN may lie in the edge region and/or the peripheral region of the projection of the handprint collection area 300 in that reference plane. This prevents the second supplementary light 2260 from entering the fields of view of the plurality of seventh image acquisition devices 2220 and from blocking the light of the first supplementary light 2230. The second supplementary light 2260 mainly supplements light for the sides of handprints such as four fingers or two thumbs. The second supplementary light 2260 and the plurality of seventh image acquisition devices 2220 are located on the same side of the handprint collection area 300, for example below it. The second supplementary light 2260 may be located between the handprint collection area 300 and the plurality of seventh image acquisition devices 2220, and may be tilted toward the center of the handprint collection area 300. The tilted second supplementary light 2260 can illuminate the sides of the hand perpendicularly, maximizing the brightness of the parts being photographed.
Exemplarily, as shown in FIGS. 23 and 24, the handprint collection system 200 may further include a light-transmitting plate 270. The light-transmitting plate 270 may be roughly square, circular, elliptical, or of another shape, which is not limited here. The plurality of seventh image acquisition devices 2220 and the handprint collection area 300 may be located on opposite sides of the light-transmitting plate 270. The lenses 121 of the plurality of seventh image acquisition devices 2220 usually require careful maintenance. With the plurality of seventh image acquisition devices 2220 and the handprint collection area 300 located on opposite sides of the light-transmitting plate 270, the seventh image acquisition devices 2220 can capture images of objects in the handprint collection area 300 through the plate. Moreover, the light-transmitting plate 270 separates the seventh image acquisition devices 2220 from the handprint collection area 300, effectively forming two separate spaces, so that when the user places a hand in the handprint collection area 300, the lenses 121 are not touched and are protected from shaking or damage caused by rubbing or collision with the hand. This improves the stability of the plurality of seventh image acquisition devices 2220 to a certain extent and extends the service life of the lenses 121.
Exemplarily, as shown in FIG. 24, the second supplementary light 2260 may be located between the handprint collection area 300 and the light-transmitting plate 270. Combined with the foregoing, the components from the handprint collection area 300 to the plurality of seventh image acquisition devices 2220 (inclusive) are, in order: the handprint collection area 300, the second supplementary light 2260, the light-transmitting plate 270, and the plurality of seventh image acquisition devices 2220. With this arrangement, the second supplementary light 2260 can supplement the collection environment of the plurality of seventh image acquisition devices 2220 from above the light-transmitting plate 270. In embodiments in which the handprint collection system 200 also has the first supplementary light 2230, the second supplementary light 2260 and the first supplementary light 2230 supplement the collection environment of the plurality of seventh image acquisition devices 2220 from high and low positions respectively, so that the handprint collection system 200 obtains handprint images with better contrast; in this case the role of the second supplementary light 2260 is even more significant. Because the second supplementary light 2260 is closer to the handprint collection area 300 than the light-transmitting plate 270 is, it avoids casting its own lamp shadow on the light-transmitting plate 270 and degrading the image collection.
Exemplarily, as shown in FIGS. 22, 25, and 26, the second supplementary light 2260 may include a first light board 2261, a second light board 2262, and a third light board 2263. The projection of the first light board 2261 may lie on the fingertip side of the handprint collection area 300, the projection of the second light board 2262 on its left side, and the projection of the third light board 2263 on its right side. The first light board 2261, the second light board 2262, and the third light board 2263 may form a gate-shaped structure that roughly surrounds the projection of the handprint collection area 300 on the reference plane. The three light boards may be connected into one piece or arranged independently of one another, which is not limited here. As can be understood from the foregoing, the opening of the gate-shaped structure may be roughly aligned with the direction in which the fingers enter and leave the handprint collection area 300. With this arrangement, the first light board 2261 supplements light on the fingertip side of the fingers placed in the handprint collection area 300, the second light board 2262 on their left side, and the third light board 2263 on their right side. Acting together, they allow the second supplementary light 2260 to supplement light on the sides of the fingers, improving the collection effect of the handprint collection system 200.
Exemplarily, as shown in FIG. 28, there is a first included angle α between the first light board 2261 and the handprint collection area 300. As shown in FIG. 27, there is a second included angle β between the second light board 2262 and the handprint collection area 300, and a third included angle γ between the third light board 2263 and the handprint collection area 300. The first included angle α is smaller than the second included angle β, and the second included angle β is equal to the third included angle γ. Preferably, the first included angle α may be 15-20° and the second included angle β 30-40°. Usually the angles (obtuse angles) between the finger surfaces on the left and right sides of a finger and the surface of the finger pad are essentially the same, so the second included angle β can equal the third included angle γ. Moreover, the angle (obtuse angle) between the fingertip surface and the finger-pad surface is relatively large, whereas the angles between the left and right side surfaces and the finger-pad surface are smaller; making the first included angle α smaller than the second included angle β and the third included angle γ allows the first light board 2261, the second light board 2262, and the third light board 2263 each to illuminate the corresponding side of the finger perpendicularly, raising the brightness of the photographed region. With this arrangement, the second supplementary light 2260 provides good fill light on the sides of fingers anywhere in the clearly imageable total space, improving the fill-light effect and thus the overall collection effect of the handprint collection system 200.
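A rough geometric reading of the preferred angle ranges: if a finger surface meets the finger-pad plane at an obtuse angle φ, a flat lamp board is roughly parallel to that surface (and so illuminates it roughly head-on) when its tilt relative to the collection plane is about 180° − φ. The sketch below only illustrates this arithmetic; the assumed obtuse angles are illustrative, not measurements from this application.

```python
def board_tilt_for_perpendicular_light(surface_obtuse_angle_deg: float) -> float:
    """Tilt (relative to the handprint collection plane) at which a flat lamp board is
    roughly parallel to a finger surface meeting the pad plane at the given obtuse angle,
    so its light strikes that surface roughly perpendicularly."""
    return 180.0 - surface_obtuse_angle_deg

# Assumed finger geometry (illustrative values only):
fingertip_surface = 162.0  # fingertip-to-pad obtuse angle, degrees
side_surface = 145.0       # left/right-side-to-pad obtuse angle, degrees

alpha = board_tilt_for_perpendicular_light(fingertip_surface)  # ~18 deg, inside the 15-20 deg range
beta = board_tilt_for_perpendicular_light(side_surface)        # ~35 deg, inside the 30-40 deg range
print(alpha, beta)
```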
Exemplarily, the handprint collection system 200 may further include an eighth image acquisition device (not shown in the figures). The eighth image acquisition device may be located to the side of the plurality of seventh image acquisition devices 2220. The optical axis of the lens 121 of the eighth image acquisition device may be tilted toward the centerline MN and is used to photograph images within the handprint collection area 300. In this way, the eighth image acquisition device can better acquire handprint information lying in a plane different from the plane photographed by the seventh image acquisition devices 2220, so as to obtain a full single-finger fingerprint image of each finger. A full single-finger fingerprint includes the fingerprints on the front of the finger pad and on its two sides; the full single-finger fingerprint image can be processed to obtain a fingerprint that simulates a rolled impression. The plurality of seventh image acquisition devices 2220 may be used to photograph at least one of four-finger fingerprints, two-thumb fingerprints, flat-palm prints, and side-palm prints. When the seventh image acquisition devices 2220 and the eighth image acquisition device are used together, full single-finger fingerprints can be photographed. The processing device can stitch the images collected by the seventh image acquisition devices 2220 and the eighth image acquisition device onto the 3D fingerprint surface of a 3D fingerprint model to obtain a 3D fingerprint image. Moreover, with the cooperation of the eighth image acquisition device and the structured light projector, the fingerprint area obtained is far larger than that obtained through traditional contact methods, and the collection quality is higher. When a high-quality full single-finger fingerprint is needed, the finger only needs to hover in the image collection area for a moment to obtain a high-definition and complete single-finger fingerprint image; there is no need to roll the finger from one side to the other on a collection device to increase the total contact area. This scheme is especially suitable for uncooperative individuals, and the fingerprint image will not be blurred. Of course, the eighth image acquisition device may also be located on the fingertip side of the plurality of seventh image acquisition devices 2220 to photograph the fingerprint at the fingertip, or on the finger-root side to photograph the fingerprint at the finger root (i.e., the part above the first knuckle, closest to the first knuckle line), or any combination of the above eighth image acquisition devices may be included at the same time. No limitation on the number of eighth image acquisition devices is intended here.
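The statement that the full single-finger image can be processed into a fingerprint simulating a rolled impression can be illustrated, under a strong simplification, by unwrapping the finger surface as an approximate cylinder: each surface point's azimuth around the finger axis is mapped to an arc-length coordinate on a flat image. The sketch below shows only this simplified unwrapping; the application itself does not prescribe this particular method, and the radius and resolution values are assumptions.

```python
import math
from typing import Tuple

def unwrap_point(azimuth_deg: float, height_mm: float, radius_mm: float,
                 dpi: float = 500.0) -> Tuple[int, int]:
    """Map a point on an (approximately cylindrical) finger surface to pixel coordinates
    of a flat, rolled-impression-like image: arc length along the circumference becomes
    the horizontal axis, position along the finger axis the vertical axis."""
    arc_mm = math.radians(azimuth_deg) * radius_mm  # distance the point would "roll out"
    px_per_mm = dpi / 25.4
    return round(arc_mm * px_per_mm), round(height_mm * px_per_mm)

# Assumed finger radius of 8 mm; azimuth 0 deg is the center of the finger pad.
print(unwrap_point(azimuth_deg=0.0, height_mm=10.0, radius_mm=8.0))   # pad center
print(unwrap_point(azimuth_deg=80.0, height_mm=10.0, radius_mm=8.0))  # toward one side of the finger
```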
该手纹采集系统包括了多个第七图像采集装置2220和第八图像采集装置,通过多个第七图像采集装置2220和第八图像采集装置的配合,可以满足单指、四指、双拇指、平掌以及侧掌等多种采集需求,使得该手纹采集系统的应用范围更广。The handprint collection system includes multiple seventh image collection devices 2220 and eighth image collection devices. , flat palm and side palm and other collection requirements make the application of the handprint collection system wider.
示例性地,第八图像采集装置可以为多组。每组对应一个第七图像采集装置2220。也就是说,第八图像采集装置的组数与第七图像采集装置2220的数量一致,且一一对应。每组第八图像采集装置可以用于拍摄各自对应的第七图像采集装置的可清晰成像子空间内的图像。相互对应的第八图像采集装置组和第七图像采集装置具有匹配的可清晰成像子空间。示例性地,每组可以包括两个第八图像采集装置,分别位于多个第七图像采集装置2220的左侧和右侧,用于配合其对应的第七图像采集装置2220拍摄全单指图像。当然,每组还可以包括一个第八图像采集装置或者更多个第八图像采集装置。如此,当用户的手位于一个第七图像采集装置2220的可清晰成像子空间时,该第七图像采集装置2220对应的一组第八图像采集装置可以同时拍摄用户的手的图像,从而保证该第七图像采集装置2220和对应的一组第八图像采集装置能够清晰地成像,进而保证正面和侧面手纹拍 摄效果。Exemplarily, there may be multiple groups of eighth image acquisition devices. Each group corresponds to a seventh image acquisition device 2220 . That is to say, the number of groups of the eighth image capture device is the same as the number of the seventh image capture device 2220 , and there is a one-to-one correspondence. Each group of eighth image acquisition devices may be used to capture images in the clearly imageable subspaces of the corresponding seventh image acquisition devices. The eighth image capture device group and the seventh image capture device corresponding to each other have matching sharply imageable subspaces. Exemplarily, each group may include two eighth image capture devices, which are respectively located on the left and right sides of the plurality of seventh image capture devices 2220, and are used to cooperate with the corresponding seventh image capture devices 2220 to capture full single-finger images. . Of course, each group may also include one eighth image acquisition device or more eighth image acquisition devices. In this way, when the user's hand is located in a subspace that can be clearly imaged by a seventh image capture device 2220, a group of eighth image capture devices corresponding to the seventh image capture device 2220 can simultaneously capture images of the user's hand, thereby ensuring the The seventh image acquisition device 2220 and a corresponding group of eighth image acquisition devices can clearly image, thereby ensuring that the front and side handprints can be taken photo effects.
Exemplarily, there are N seventh image acquisition devices 2220 and one group of eighth image acquisition devices. The handprint collection system 200 may further include an angle adjustment device (not shown in the figures). The angle adjustment device may be connected to the eighth image acquisition device so that the optical axis of the lens 121 of the eighth image acquisition device can assume at least N tilt angles with respect to the centerline MN. At the n-th tilt angle, where n belongs to {1, 2, 3, ..., N-1, N}, the eighth image acquisition device is used to capture the clearly imageable subspace of the n-th seventh image acquisition device. Taking a specific embodiment as an example, assume there are three seventh image acquisition devices, i.e., N is 3, and one group of eighth image acquisition devices; the angle adjustment device allows the optical axis of the lens of the eighth image acquisition device to assume at least three tilt angles with respect to the centerline MN. At the first tilt angle, the eighth image acquisition device can capture the clearly imageable subspace of the first seventh image acquisition device; at the second tilt angle, it can capture the clearly imageable subspace of the second seventh image acquisition device; and at the third tilt angle, it can capture the clearly imageable subspace of the third seventh image acquisition device. With such an arrangement, the angle adjustment device and a single group of eighth image acquisition devices can cooperate with all of the seventh image acquisition devices 2220 to capture clear handprint images, which reduces the number of eighth image acquisition devices required and thus the number of parts and the cost; it also reduces the space occupied, optimizes the structure of the handprint collection system 200, and facilitates its miniaturized design.
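As a purely illustrative sketch of how such an angle adjustment device might be driven in software, the following Python snippet maps the index n of the seventh image acquisition device whose clearly imageable subspace currently contains the hand to a precalibrated tilt angle for the eighth image acquisition device. The function name and the angle values are assumptions invented for this example and are not part of the disclosure.

    # Hypothetical sketch: one precalibrated tilt angle of the eighth image
    # acquisition device per seventh image acquisition device (N devices).
    TILT_ANGLES_DEG = [20.0, 35.0, 50.0]  # example calibration values, N = 3

    def tilt_angle_for(n: int) -> float:
        """Return the tilt angle used when the hand lies in the clearly
        imageable subspace of the n-th seventh image acquisition device
        (n is 1-based, n in {1, ..., N})."""
        if not 1 <= n <= len(TILT_ANGLES_DEG):
            raise ValueError("n out of range")
        return TILT_ANGLES_DEG[n - 1]

    # Example: the hand is in the subspace of the 2nd seventh device.
    print(tilt_angle_for(2))  # -> 35.0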
The handprint collection system of the embodiments of the present application can be used for simultaneous collection of four-finger fingerprints, simultaneous collection of double-thumb fingerprints, or palmprint collection. Compared with single-finger collection, when collecting multi-finger fingerprints or palmprints, different parts of the fingers or palm are more likely to be tilted (in pitch, roll or yaw), so that some regions fall outside the clearly imageable range of the image acquisition device and the collected image is not clear. In the prior art, the user is required to place the hand at a specific height and is subject to many restrictions on hand posture before the handprint can be clearly imaged. In order to at least partly solve the above technical problems, an embodiment of the present application provides a non-contact method for collecting the handprint of a target object. The device implementing this non-contact handprint collection method may be the handprint collection system 200 described above. In one or some embodiments, the device implementing the method may be the handprint collection system 200 shown in FIGS. 22-29. The handprint collection system may include a preview image acquisition device and one or more preset image acquisition devices (for example, multiple image acquisition devices whose clearly imageable object-plane subspaces partially overlap), where the preview image acquisition device may be one of the preset image acquisition devices or a separate image acquisition device independent of them. The preview image acquisition device and/or the preset image acquisition devices may be any type of image acquisition device, such as an industrial camera or a camera module. FIG. 30 shows a schematic flowchart of a method 3000 for non-contact collection of a target object's handprint according to an embodiment of the present application. As shown in FIG. 30, the method 3000 may include a collection step S3010, a posture judgment step S3020, an acquisition step S3030 and a processing step S3040.
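Purely for orientation, the four steps of method 3000 can be sketched as the following loop. Every function here is a placeholder invented for illustration; none of the names correspond to an actual interface of the system.

    # Illustrative outline of method 3000 (S3010-S3040); each function body is
    # a placeholder standing in for the corresponding step described below.
    def collect_preview_image():              # S3010: preview image of the collection area
        ...

    def posture_is_qualified(preview):        # S3020: position / yaw / roll / pitch check
        ...

    def acquire_images_to_process(preview):   # S3030: per-finger (or palm) crops
        ...

    def process(image):                       # S3040: recognition or simulated impression
        ...

    def run_once():
        preview = collect_preview_image()                 # S3010
        if not posture_is_qualified(preview):             # S3020
            return None                                   # wait for a better posture
        for image in acquire_images_to_process(preview):  # S3030
            process(image)                                # S3040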
Collection step S3010: use the preview image acquisition device to collect a preview handprint image of the handprint collection area, where the handprint collection area is used for placing a hand and the preview handprint image contains the handprint of the target object; the target object includes at least one of the four fingers of a hand, the two thumbs, and a palm. The three-dimensional target described above may be or include the target object. The one or more image acquisition devices in the handprint collection system 200 may include the preview image acquisition device, and the target images collected by the one or more image acquisition devices may include the preview handprint image.
When collecting handprint images of the target object, the target object can be placed into the handprint collection area of the handprint collection system 200 in one go. For example, when the target object is four fingers, the four fingers can be placed into the handprint collection area at the same time instead of placing a single finger four separate times; in this way, the fingerprint images of the four fingers can be obtained at once, improving collection efficiency and reducing the number of times the user must cooperate. Referring back to FIG. 25, the preset image acquisition devices may be the seventh image acquisition devices 2220, and the preview image acquisition device may be one of the preset image acquisition devices 2220 or another separate image acquisition device (such a separate device is not shown in FIG. 25). In one embodiment, the preview image acquisition device can be used to collect a preview handprint image of the target object. When the user places the hand in the handprint collection area, the preview image acquisition device can obtain the user's preview handprint image at the current collection time (for example, time t).
Posture judgment step S3020: determine, based on the preview handprint image, whether the posture of the posture judgment object is qualified; if qualified, go to the acquisition step S3030. The posture includes at least one of the following posture indicators: position, yaw angle, roll angle and pitch angle. The posture judgment object is the target object or a processing object.
Exemplarily, based on the acquired preview handprint image, it can be judged whether the posture of the posture judgment object at the preview handprint image collection time (for example, the time t above) is qualified. In one embodiment, the posture judgment object may be the target object or a processing object. A processing object refers to a single finger or the palm contained in the target object. For example, when the target object is the four fingers of a hand, the posture judgment object may be the target object (the four fingers) or a processing object (a single finger). When the posture judgment object is the four fingers, a conclusion of qualified or unqualified posture is given for the four fingers as a whole; when the posture judgment object is a single finger, a conclusion of qualified or unqualified posture is given for that single finger. Similarly, when the target object is the two thumbs of the hands, the posture judgment object may be the target object (both thumbs) or a processing object (a single thumb). When the target object is the palm, the posture judgment object can only be the palm.
In one example, the posture judgment object may be the target object. Judging whether the posture of the target object is qualified may include judging whether one or more of the position, yaw angle, roll angle and pitch angle of the target object in the preview handprint image are qualified. In this case, the posture of the target object is regarded as qualified only when the posture of the target object as a whole (for example, all four fingers) is qualified, and the subsequent acquisition step S3030 can then be performed. If the posture of any one or more processing objects in the target object is unqualified, the posture of the target object may be regarded as unqualified and the subsequent acquisition step S3030 is not performed. The position above may indicate whether the target object in the preview handprint image is shifted to the left or right, whether it lies in the central region of the preview handprint image, and so on.
In another example, the posture judgment object may be a processing object. Judging whether the posture of a processing object is qualified may include judging whether one or more of the position, yaw angle, roll angle and pitch angle of the processing object in the preview handprint image are qualified. In this case, as soon as the posture of the current processing object is qualified, the subsequent acquisition step S3030 can be performed for it.
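As a concrete illustration of the posture indicators, the following sketch checks the position, yaw, roll and pitch of a single processing object against threshold values. The thresholds, the normalized-coordinate convention and the function signature are assumptions made only for this example; the disclosure does not specify particular limits.

    # Hypothetical posture check for one processing object (single finger or palm).
    # Angles are in degrees; (center_x, center_y) is the object's center in
    # normalized image coordinates (0..1). All thresholds are example values.
    POSITION_RANGE = (0.2, 0.8)     # object center must lie in the central region
    MAX_YAW_DEG = 15.0
    MAX_ROLL_DEG = 15.0
    MAX_PITCH_DEG = 20.0

    def posture_qualified(center_x, center_y, yaw_deg, roll_deg, pitch_deg):
        in_center = (POSITION_RANGE[0] <= center_x <= POSITION_RANGE[1]
                     and POSITION_RANGE[0] <= center_y <= POSITION_RANGE[1])
        return (in_center
                and abs(yaw_deg) <= MAX_YAW_DEG
                and abs(roll_deg) <= MAX_ROLL_DEG
                and abs(pitch_deg) <= MAX_PITCH_DEG)

    # Example: a finger roughly centered and nearly flat passes the check.
    print(posture_qualified(0.5, 0.55, yaw_deg=4.0, roll_deg=2.0, pitch_deg=8.0))  # True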
Acquisition step S3030: acquire the to-be-processed handprint image corresponding to a processing object contained in the target object. The to-be-processed handprint image contains a processing object whose posture is qualified, where a processing object is a single finger or the palm in the target object. The to-be-processed handprint image is determined from the target handprint image corresponding to that processing object. The target handprint image is collected by a target image acquisition device for the handprint collection area within a specified time interval from the preview handprint image collection time, and the target handprint image contains the handprint of the target object. The clearly imageable object-plane subspace of the target image acquisition device matches the height of the processing object. The one or more image acquisition devices in the handprint collection system 200 may include the target image acquisition device, and the target images collected by the one or more image acquisition devices may include the target handprint image.
After it is determined that the posture of a processing object is qualified, the to-be-processed handprint image corresponding to that processing object is acquired. If the postures of all processing objects corresponding to the target object are qualified, the to-be-processed handprint images corresponding to all processing objects can be acquired; if the postures of only some processing objects corresponding to the target object are qualified, the to-be-processed handprint images corresponding to that subset of processing objects can be acquired.
The to-be-processed handprint image is determined from the target handprint image. The target handprint image contains the handprint of the target object, whereas the to-be-processed handprint image is a handprint image containing only a single processing object. For example, when the target object is four fingers and the processing object is the middle finger, the target handprint image contains the four fingers, and the to-be-processed handprint image is an image cropped from that target handprint image which contains only the middle finger. When the processing object is a single finger, the to-be-processed handprint image may be an image in which the entire single finger is segmented out, an image of the single finger's fingerprint region, or an image in which only the single finger's fingerprint is segmented out; no limitation is imposed here. When the target handprint image includes a structured light channel and a non-structured light channel, the to-be-processed handprint image can be determined by performing target segmentation on the non-structured light channel of the target handprint image.
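The following sketch shows one plausible way to obtain per-finger crops from the non-structured-light channel of a target image. The use of OpenCV-style thresholding and contour detection is an assumption made for illustration only; the disclosure does not prescribe any particular segmentation algorithm.

    import cv2
    import numpy as np

    def crop_finger_regions(non_structured_channel: np.ndarray, max_fingers: int = 4):
        """Segment bright finger regions from an 8-bit, single-channel
        non-structured-light image and return one crop per detected finger
        (illustrative only)."""
        # Simple global (Otsu) threshold; a real system would more likely use
        # a learned segmentation model or adaptive thresholding.
        _, mask = cv2.threshold(non_structured_channel, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # OpenCV 4.x return signature: (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Keep the largest contours, assumed to be the fingers.
        contours = sorted(contours, key=cv2.contourArea, reverse=True)[:max_fingers]
        crops = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            crops.append(non_structured_channel[y:y + h, x:x + w])
        return crops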
It can be understood that the step of determining the to-be-processed handprint image from the target handprint image may be performed during the execution of the acquisition step S3030, or during the execution of the posture judgment step S3020, or after the posture judgment step S3020 and before the execution of the acquisition step S3030.
The to-be-processed handprint image of a processing object is determined from the target handprint image corresponding to that processing object. It can be understood that different processing objects belonging to the same target object may correspond to the same target handprint image or to different target handprint images, and likewise to the same target image acquisition device or to different target image acquisition devices. If, at a certain moment, it is determined from the heights of a first processing object and a second processing object belonging to the same target object that their corresponding target image acquisition devices are the same, then two to-be-processed handprint images respectively containing the two processing objects can be obtained from the same target handprint image collected by that target acquisition device. If, at a certain moment, it is determined from the heights of the first processing object and the second processing object belonging to the same target object that they correspond to different target image acquisition devices, then for these two different processing objects, two to-be-processed handprint images respectively containing them can also be obtained from two different target handprint images.
The target handprint image of a processing object is collected by the target image acquisition device, and the clearly imageable object-plane subspace of the target image acquisition device matches the height of the processing object. The height of the processing object is used to determine the target image acquisition device. The height of the processing object may be determined by a sensor such as a distance sensor, or may be determined from the preview handprint image. The image acquisition devices of the handprint collection system can be arranged in various schemes.
Scheme 1: the image acquisition devices of the handprint collection system include only the preview image acquisition device, which is used not only for preview but also for collecting the target handprint image. In this case the preview image acquisition device has a sufficiently large clearly imageable object-plane space to image four fingers, both thumbs or a palm clearly. The target image acquisition device is then the preview image acquisition device, and the clearly imageable object-plane subspace of the preview image acquisition device matches the height of the processing object.
Under this scheme, the target handprint image may be the preview handprint image in which the posture of the processing object was judged qualified, or it may be an image re-captured after the posture of the processing object is determined to be qualified (this is usually done when the preview handprint image was captured at a lower resolution, or when re-capturing is expected to give better focus).
Scheme 2: the system includes one preview image acquisition device and one or more preset image acquisition devices different from the preview image acquisition device. The preview handprint image collected by the preview image acquisition device is used only to judge the hand posture, and the preset image acquisition devices are used to collect the target handprint image; that is, the target image acquisition device is selected from the preset image acquisition devices. When there are multiple preset image acquisition devices, their clearly imageable object-plane subspaces partially overlap, and which preset image acquisition device serves as the target image acquisition device is determined by how well its clearly imageable object-plane subspace matches the height of the processing object. For example, suppose there are preset image acquisition devices A and B whose clearly imageable object-plane subspaces are [h1, h2] and [h2, h3] respectively (a single-point overlap is used only as an example; the two subspaces may of course overlap over an interval). When the height of the processing object falls between h1 and h2, A is used as the target image acquisition device; when it falls between h2 and h3, B is used as the target image acquisition device.
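A minimal sketch of this device-selection rule, assuming each preset image acquisition device is described by the height interval of its clearly imageable object-plane subspace; the interval endpoints and units are placeholder values, not taken from the disclosure:

    # Each entry: (device name, lower bound, upper bound) of the clearly
    # imageable object-plane subspace, in millimetres (example values).
    PRESET_DEVICES = [("A", 60.0, 100.0),   # [h1, h2]
                      ("B", 100.0, 140.0)]  # [h2, h3]

    def select_target_device(object_height_mm: float):
        """Return the name of the preset device whose subspace contains the
        height of the processing object, or None if no device matches."""
        for name, low, high in PRESET_DEVICES:
            if low <= object_height_mm <= high:
                return name
        return None

    print(select_target_device(85.0))   # -> "A"
    print(select_target_device(120.0))  # -> "B"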
Under this scheme, the target handprint image may be captured almost simultaneously with the preview handprint image (in this case, while the preview image acquisition device collects images, the preset image acquisition devices are also collecting images; after the height of the processing object is determined from the preview handprint image, the target image acquisition device can be selected from the preset image acquisition devices, and the image it collected becomes the target handprint image), or it may be an image re-captured by the target image acquisition device after the posture of the processing object is determined to be qualified, the height of the processing object is determined from the preview handprint image, and the target image acquisition device is selected according to that height.
Scheme 3: the system includes multiple preset image acquisition devices, and the preview image acquisition device is one of them. The preview handprint image collected by the preview image acquisition device is used to judge the hand posture, and both the preview image acquisition device and the other preset image acquisition devices are used to collect the target handprint image; which image acquisition device serves as the target image acquisition device is determined by how well its clearly imageable object-plane subspace matches the height of the processing object. For example, suppose there are a preview image acquisition device C and preset image acquisition devices A and B, whose clearly imageable object-plane subspaces are h0-h1, h1-h2 and h2-h3 respectively. When the height of the processing object falls between h0 and h1, C is used as the target image acquisition device; when it falls between h1 and h2, A is used; when it falls between h2 and h3, B is used. In other words, the target image acquisition device may be the preview image acquisition device or one of the other preset image acquisition devices. Under this scheme, the target handprint image may be the preview handprint image itself (in this case, after the height of the processing object is determined from the preview handprint image, the preview image acquisition device is selected as the target image acquisition device); or it may be captured almost simultaneously with the preview handprint image (in this case, while the preview image acquisition device collects images, the preset image acquisition devices are also collecting images; after the height of the processing object is determined from the preview handprint image, the target image acquisition device selected according to the height is a preset image acquisition device rather than the preview image acquisition device, and the image it collected becomes the target handprint image); or it may be an image re-captured by the target image acquisition device after the posture of the processing object is determined to be qualified, the height of the processing object is determined from the preview handprint image, and the target image acquisition device is selected according to that height (in which case the target image acquisition device may be the preview image acquisition device or another preset image acquisition device).
It can be understood that, regardless of whether the target image acquisition device is the preview image acquisition device, the target acquisition device corresponding to a processing object satisfies the following requirement: its clearly imageable object-plane subspace matches the height of that processing object. This ensures that the processing object is clearly imaged in the target handprint image. In one embodiment, a single height is determined for the target object as a whole and used as the height of each processing object contained in the target object; in this case all processing objects may correspond to the same target image acquisition device, which is suitable when the posture judgment object is the whole hand. In another embodiment, the height may be determined separately for each processing object. When two processing objects have different heights, they may correspond to different target image acquisition devices, and thus to different target handprint images.
The target handprint image is collected by the target image acquisition device for the handprint collection area within a specified time interval from the preview handprint image collection time. The specified time interval may be set to any suitable duration as required. For example, the specified time interval may be [0, Δt], where Δt may lie in the range of, for example, [1, 100] ms. The interval between the collection time of the target handprint image and that of the preview handprint image should not be too long, so as to avoid the situation in which the hand posture changes substantially and, although the posture of the posture judgment object was judged qualified from the preview handprint image, the posture at the actual collection time is not. Collecting the target handprint image within the specified time interval from the preview handprint image collection time may include the case where the target handprint image and the preview handprint image are collected almost simultaneously, as well as the case where the target handprint image is collected after the preview handprint image. In particular, the case where the target handprint image is the preview handprint image itself may also be regarded as collecting the target handprint image within the specified time interval from the preview handprint image collection time.
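As an illustration, the timing constraint can be expressed as a simple predicate on the two collection timestamps; Δt = 50 ms is used here only as an example within the stated 1-100 ms range.

    # The target image counts as valid if it was collected no earlier than the
    # preview image and no more than DELTA_T_S seconds after it (the case where
    # the target image *is* the preview image gives a difference of 0).
    DELTA_T_S = 0.05  # example: delta t = 50 ms

    def target_image_in_time(preview_time_s: float, target_time_s: float) -> bool:
        elapsed = target_time_s - preview_time_s
        return 0.0 <= elapsed <= DELTA_T_S

    print(target_image_in_time(10.000, 10.030))  # True  (30 ms later)
    print(target_image_in_time(10.000, 10.200))  # False (200 ms later)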
Processing step S3040: process the acquired to-be-processed handprint image.
For example, processing the acquired to-be-processed handprint image may include one or more of the following: performing handprint recognition on the to-be-processed handprint image; transforming the to-be-processed handprint image into a simulated impression handprint image. Transforming the to-be-processed handprint image into a simulated impression handprint image can be achieved by three-dimensional reconstruction and planar unrolling of the to-be-processed handprint image, as described below.
With the non-contact handprint collection method and handprint collection system of the embodiments of the present application, the preview handprint image collected by the preview image acquisition device is used to judge the posture of the posture judgment object, and after the posture is qualified, the to-be-processed handprint image of the processing object is obtained from the target handprint image captured by the target image acquisition device whose subspace matches the height of the processing object. This scheme ensures fast posture judgment through the preview image acquisition device and determines, according to the height of the processing object, the target image acquisition device matching that height, so that the handprint collection apparatus of the embodiments of the present application can clearly image handprints over a larger spatial range and with looser constraints on hand posture, thereby reducing the degree of user cooperation required and improving the user experience.
Exemplarily, the target image acquisition device is selected from multiple image acquisition devices whose clearly imageable object-plane subspaces partially overlap, and the method may further include: determining the height of the processing object based on the preview handprint image; and selecting, from the multiple image acquisition devices according to the height of the processing object, the image acquisition device whose clearly imageable object-plane subspace matches that height as the target image acquisition device corresponding to the processing object. The acquisition step S3030 may include: using the target image acquisition device to collect the target handprint image of the handprint collection area within the specified time interval from the preview handprint image collection time, and determining the to-be-processed handprint image from the target handprint image; or acquiring the target handprint image collected by the target image acquisition device for the handprint collection area simultaneously with the preview handprint image, and determining the to-be-processed handprint image from that target handprint image; or acquiring the to-be-processed handprint image directly, where the to-be-processed handprint image is determined from the preview handprint image and the preview handprint image is the target handprint image.
In this embodiment, the arrangement of the image acquisition devices of the handprint collection system includes Scheme 2 and Scheme 3 described in step S3030.
In this embodiment, the handprint collection system may include multiple image acquisition devices whose clearly imageable object-plane subspaces partially overlap (i.e., the preset image acquisition devices described above), and the target image acquisition device for a processing object may be selected from them according to the height of the processing object. For example, referring to the two seventh image acquisition devices 2220 in the embodiment of FIG. 25, their clearly imageable object-plane subspaces partially overlap. The height of a processing object can be determined based on the preview handprint image (the height of each processing object may be calculated separately, or the height of the target object may be calculated and used as the height of the processing objects it contains). For example, the preview handprint image may be collected under illumination by a structured light source, and the structured light image corresponding to the preview handprint image can be determined from the preview handprint image. Exemplarily, when the structured light source is a monochromatic source such as red light, the red channel of the preview handprint image is the structured light channel and can serve as the structured light image corresponding to the preview handprint image. Based on the structured light image corresponding to the preview handprint image (for example, the structured light channel of the preview handprint image), three-dimensional reconstruction can be performed on the processing object in the preview handprint image. From the three-dimensional reconstruction result, the height of the processing object can be obtained (when the line connecting the target object and the image acquisition device is essentially vertical).
The heights of the processing objects contained in a target object may be the same or different. In a first case where the processing objects share the same height, the height of each processing object contained in the target object may be calculated and the mean of some or all of these heights used as the final height of each processing object; exemplarily, the heights of the four fingers may be calculated, the largest and the smallest of them selected, and their mean used as the height of the target object and hence as the height of each processing object. In a second case where the processing objects also share the same height, the height of the target object may be calculated directly and used as the height of each processing object. In a third case where the processing objects have different heights, the height of each processing object may be calculated separately and used as that processing object's height. It can be understood that if, in the posture judgment step S3020, the granularity of posture judgment is the whole target object (i.e., the posture judgment object is the target object, and the judgment result is that the posture of the target object is qualified or unqualified), then in the height calculation the heights of the processing objects contained in the target object may be calculated individually or as a whole, and may be the same or different. When all processing objects have the same height, their corresponding target acquisition devices are necessarily the same; when each processing object has its own height, the corresponding target acquisition devices may differ. If, in the posture judgment step S3020, the granularity of posture judgment is a single processing object (i.e., the posture judgment object is a single processing object, and the judgment result is that the posture of that processing object is qualified or unqualified), then the height of each processing object is usually calculated separately, because at that point it cannot be determined that the postures of all processing objects are qualified and heights can only be calculated for the processing objects whose postures are qualified; in this case the target acquisition devices corresponding to different processing objects contained in the target object may differ.
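The first and third height-calculation cases described above can be summarized in the following sketch; in practice the per-finger heights would come from the structured-light reconstruction, whereas here they are simply given as inputs, and the numeric values are invented for the example.

    # Case 1: all processing objects share one height, taken here as the mean
    #         of the minimum and maximum per-finger heights (as in the example
    #         above). Case 2 (one height computed directly for the whole target
    #         object) is not shown. Case 3: each finger keeps its own height.
    def shared_height_case1(per_finger_heights_mm):
        values = list(per_finger_heights_mm)
        return (min(values) + max(values)) / 2.0

    def per_finger_heights_case3(per_finger_heights_mm):
        return dict(per_finger_heights_mm)  # each processing object keeps its own value

    heights = {"index": 92.0, "middle": 97.0, "ring": 95.0, "little": 88.0}
    print(shared_height_case1(heights.values()))   # -> 92.5 (mean of 88.0 and 97.0)
    print(per_finger_heights_case3(heights))       # -> unchanged per-finger heights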
The following description takes the case where the preset image acquisition devices include image acquisition device C1 and image acquisition device C2 as an example.
In one embodiment, the posture judgment object is a target object consisting of four fingers, and the height is calculated for the four fingers as a whole. Based on the preview handprint image collected by the preview image acquisition device (C1, C2 or another image acquisition device), it is determined that the posture of the target object is qualified. Then, the overall height of the four fingers is determined from the structured light image corresponding to the preview handprint image (for example, the heights of the index, middle, ring and little fingers are calculated separately and their mean is taken as the height of each processing object (each single finger)). Next, the target acquisition device C1 is selected according to the height of the processing objects; since all processing objects have the same height, the target acquisition device corresponding to each processing object is C1. Then, at time T1 (within the specified time interval from the preview image collection time), the target handprint image is collected by the target image acquisition device C1, and finger segmentation is performed on the target handprint image to obtain the to-be-processed handprint image corresponding to each processing object (each single finger). The processing objects and their corresponding target handprint images are shown in Table 1.
Table 1

    Processing object    Target handprint image
    Index finger         Image collected by target image acquisition device C1 at time T1
    Middle finger        Image collected by target image acquisition device C1 at time T1
    Ring finger          Image collected by target image acquisition device C1 at time T1
    Little finger        Image collected by target image acquisition device C1 at time T1
In another embodiment, the posture judgment object is a target object consisting of four fingers, and the height is calculated separately for each finger in the target object. Based on the preview handprint image collected by the preview image acquisition device (C1, C2 or another image acquisition device), it is determined that the posture of the target object is qualified. Then, the heights of the four fingers are determined separately from the structured light image corresponding to the preview handprint image (for example, the heights of the index, middle, ring and little fingers are calculated separately). Next, the target acquisition device corresponding to each processing object is selected according to its height (for example, the target acquisition device for the index and little fingers is C1, and the target acquisition device for the middle and ring fingers is C2). Then, at time T1 (within the specified time interval from the preview image collection time), a handprint image is collected by the target image acquisition device C1 as the target handprint image of the index and little fingers, and a handprint image is collected by the target image acquisition device C2 as the target handprint image of the middle and ring fingers; finger segmentation is performed on the target handprint image of each processing object to obtain the to-be-processed handprint image corresponding to that processing object (single finger). The processing objects and their corresponding target handprint images are shown in Table 2.
Table 2

    Processing object    Target handprint image
    Index finger         Image collected by target image acquisition device C1 at time T1
    Middle finger        Image collected by target image acquisition device C2 at time T1
    Ring finger          Image collected by target image acquisition device C2 at time T1
    Little finger        Image collected by target image acquisition device C1 at time T1
In yet another embodiment, the posture judgment object may be each of the four fingers, and the height is calculated for each finger. While the preview image acquisition device (for example, C1) collects preview handprint images, the other image acquisition device C2 also collects images. Based on the preview handprint images collected by the preview image acquisition device, the postures of the processing objects are judged (for example, in the preview image collected at T1, the index finger is judged qualified and the other fingers unqualified; in the preview image collected at T2, the middle and ring fingers are judged qualified and the little finger unqualified; in the preview image collected at T3, the little finger is judged qualified). Then, the heights of the four fingers are determined from the structured light images corresponding to the preview handprint images in which each processing object was qualified (for example, the height of the index finger is calculated from the preview handprint image collected at T1, the heights of the middle and ring fingers from the preview handprint image collected at T2, and the height of the little finger from the preview handprint image collected at T3). Next, the target acquisition device corresponding to each processing object is selected according to its height (for example, the target acquisition device for the index, ring and little fingers is C1, and that for the middle finger is C2). Then, the preview handprint image captured by C1 at time T1 is used as the target handprint image of the index finger, the handprint image captured by C2 at time T2 as the target handprint image of the middle finger, the preview handprint image captured by C1 at time T2 as the target handprint image of the ring finger, and the preview handprint image captured by C1 at time T3 as the target handprint image of the little finger. The processing objects and their corresponding target handprint images are shown in Table 3.
Table 3

    Processing object    Target handprint image
    Index finger         Preview handprint image captured by C1 at time T1
    Middle finger        Handprint image captured by C2 at time T2
    Ring finger          Preview handprint image captured by C1 at time T2
    Little finger        Preview handprint image captured by C1 at time T3
Correspondingly, the acquisition step S3030 may include the following steps.
In one embodiment, if, through the posture judgment step S3020, the posture of the target object at the preview handprint image collection time is determined to be qualified based on the preview handprint image, then step S3030 includes using the target image acquisition device to perform image collection on the target object in the handprint collection area within the specified time interval from the current preview handprint image collection time, so as to obtain the target handprint image, and then determining the to-be-processed handprint image from the collected target handprint image. For example, if the preview image acquisition device is C1, the image acquisition devices with overlapping clearly imageable subspaces are C1 and C2, and the target image acquisition device selected for the processing object according to its height is C2, this amounts to re-collecting the target handprint image with a new image acquisition device different from the preview image acquisition device. As another example, if the preview image acquisition device is C1, the devices with overlapping clearly imageable subspaces are C1 and C2, and the target image acquisition device selected according to the height of the processing object is C1, then although the preview image acquisition device and the target image acquisition device are the same, the resolution of the preview handprint image collected during preview may be insufficient, so the target handprint image may be re-collected. As yet another example, if the preview image acquisition device is C1, the devices with overlapping clearly imageable subspaces are C2 and C3, and the target image acquisition device selected according to the height of the processing object is C3, this likewise amounts to re-collecting the target handprint image with a new image acquisition device different from the preview image acquisition device.
In another embodiment, step S3030 includes acquiring the target handprint image collected by the target image acquisition device for the handprint collection area simultaneously with the preview handprint image, and determining the to-be-processed handprint image from the target handprint image. That is, while the preview image acquisition device collects the preview handprint image of the handprint collection area, other image acquisition devices simultaneously collect target handprint images of the same area. For example, if the preview image acquisition device is C1, the devices with overlapping clearly imageable subspaces are C1 and C2, and C2 also collects a handprint image while C1 collects the preview handprint image, then if the target image acquisition device for the processing object determined from the preview handprint image is C2, there is no need to re-collect the target handprint image with C2, because the handprint image that C2 collected while C1 was collecting the preview can serve as the target handprint image. Although the target handprint image was collected earlier, the to-be-processed handprint image had not yet been determined (for example, by segmenting the target handprint image), so the to-be-processed handprint image can be determined from the target handprint image in the acquisition step S3030.
In yet another embodiment, step S3030 includes directly acquiring the to-be-processed handprint image, where the to-be-processed handprint image is determined from the preview handprint image. In this case, the posture of the processing object is judged qualified from the preview handprint image, the target handprint image determined from the height of the processing object is the preview handprint image itself, and the resolution of the preview handprint image is sufficient, so there is no need to collect a new image as the target handprint image. That is, the target handprint image is the preview handprint image; since the preview handprint image was already segmented during posture judgment, the single-finger image segmented at that time can be acquired directly as the to-be-processed handprint image of the processing object.
According to the above technical solution, the to-be-processed handprint image can be acquired in different ways to meet the needs of different application scenarios; this solution therefore has high flexibility and can balance image quality and latency against the degree of user cooperation required.
Exemplarily, the preview image acquisition device is the image acquisition device with the shortest focal length in the set of image acquisition devices formed by the preview image acquisition device and the multiple image acquisition devices.
The image acquisition device with the largest field of view/depth of field is usually selected as the preview image acquisition device.
When the preview image acquisition device is one of the multiple image acquisition devices, the set of image acquisition devices is simply the multiple image acquisition devices. For example, if the preview image acquisition device is image acquisition device C1 and the target image acquisition devices are image acquisition devices C1 and C2, the set of image acquisition devices may include C1 and C2. When the preview image acquisition device is not one of the multiple image acquisition devices, the set of image acquisition devices includes the preview image acquisition device and the multiple image acquisition devices. For example, if the preview image acquisition device is image acquisition device C3 and the target image acquisition devices are image acquisition devices C1 and C2, the set of image acquisition devices may include C1, C2 and C3.
By way of example and not limitation, the lenses of the image acquisition devices in the set may be arranged at the same height but with different focal lengths. For example, the focal lengths of the two image acquisition devices C1 and C2 may be 6 millimeters (mm) and 8 mm respectively, and the preview image acquisition device may be the image acquisition device C1 with the 6 mm focal length. With such an arrangement, the short-focal-length image acquisition device can collect a preview handprint image of the target object with a larger field of view, so that the complete target object is captured as far as possible.
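The rule of choosing the shortest-focal-length device (and hence the largest field of view) as the preview device can be written as follows; the dictionary of focal lengths simply mirrors the 6 mm / 8 mm example above and is otherwise an assumption.

    # Choose the preview image acquisition device as the one with the shortest
    # focal length (largest field of view) in the device set.
    DEVICE_FOCAL_LENGTHS_MM = {"C1": 6.0, "C2": 8.0}

    preview_device = min(DEVICE_FOCAL_LENGTHS_MM, key=DEVICE_FOCAL_LENGTHS_MM.get)
    print(preview_device)  # -> "C1"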
Exemplarily, the target handprint image is collected under illumination by a structured light source and a non-structured light source, and the to-be-processed handprint image includes a structured light channel and a non-structured light channel. The processing step may include: for each processing object, determining the three-dimensional information of the processing object from the structured light channel of the to-be-processed handprint image corresponding to that processing object; performing an unrolling transformation, according to the three-dimensional information, on the non-structured light channel of the to-be-processed handprint image corresponding to that processing object, to obtain the unrolled image corresponding to that processing object; and obtaining the simulated impression image corresponding to that processing object from the unrolled image.
In one embodiment, the target handprint image may be collected simultaneously under illumination by the structured light source and a non-structured light source (for example, a visible light source); in this case the target handprint image has a corresponding structured light image and non-structured light image. Exemplarily, when the structured light source and the non-structured light source (for example, an illumination light source) have different colors, the target handprint image can have a structured light channel and a non-structured light channel (which can serve respectively as the structured light image and the non-structured light image corresponding to the target handprint image), and the corresponding to-be-processed handprint image can likewise include a structured light channel and a non-structured light channel. Based on the structured light channel of the to-be-processed handprint image corresponding to a processing object, three-dimensional reconstruction can be performed on that processing object to determine its three-dimensional information. The three-dimensional information may be the three-dimensional world coordinates, in the world coordinate system, of the points on the processing object. Exemplarily, when the structured light is stripe light, the transformation between the world coordinate system and the image coordinate system can be determined from the individual stripes in the structured light channel of the to-be-processed handprint image, and based on this transformation the two-dimensional image coordinates of the points on the processing object can be converted into three-dimensional world coordinates, giving the three-dimensional information of the processing object. Those skilled in the art will understand how three-dimensional reconstruction is implemented, so it is not described in detail here.
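To give a feel for stripe-based reconstruction, the following is a deliberately simplified model (an assumption for illustration only, not the reconstruction used in the disclosure): with the camera looking straight down at a reference plane, a surface point raised by height h shifts the observed stripe by some number of pixels, and under a small-angle triangulation approximation the height is roughly proportional to that shift. All parameter names and values are hypothetical.

    # Much-simplified stripe-displacement model of structured-light height
    # recovery: h ~ shift_px * mm_per_pixel * camera_height / baseline.
    def height_from_stripe_shift(shift_px: float,
                                 mm_per_pixel: float,
                                 camera_height_mm: float,
                                 baseline_mm: float) -> float:
        return shift_px * mm_per_pixel * camera_height_mm / baseline_mm

    # Example with made-up calibration numbers:
    print(height_from_stripe_shift(shift_px=12.0, mm_per_pixel=0.1,
                                   camera_height_mm=300.0, baseline_mm=40.0))
    # -> 9.0 (mm above the reference plane)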
According to the three-dimensional information, the unstructured light channel of the handprint image to be processed corresponding to the processing object can be subjected to an expansion transformation to obtain an expanded image corresponding to that processing object; further processing of the expanded image (for example, image enhancement) yields the simulated imprint image corresponding to that processing object.
According to the above technical solution, the three-dimensional information of the processing object is obtained from the structured light channel of the handprint image to be processed corresponding to that processing object, the unstructured light channel of that handprint image is subjected to an expansion transformation based on the three-dimensional information, and the corresponding simulated imprint image is obtained from the expanded image. The contactlessly collected simulated imprint image obtained in this way is closer to a handprint collected by actual inked impression, so that higher matching accuracy can be achieved when the simulated imprint image is compared against an inked fingerprint image.
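The following Python sketch illustrates, under simplifying assumptions, the kind of processing described above: heights are recovered from the structured light channel and then used to flatten (expand) the unstructured light channel row by row. The function `stripe_to_height`, the crude stripe detection, and the arc-length resampling are assumptions made for the sketch; they stand in for the calibrated world/image coordinate conversion and the expansion transformation of the actual system and are not the patented implementation.

```python
import numpy as np

def reconstruct_heights(structured_channel: np.ndarray, stripe_to_height) -> np.ndarray:
    """Per-pixel height map from the structured light channel.

    `stripe_to_height(row, col)` is an assumed, pre-calibrated mapping from a
    detected stripe pixel to a world-space height; the calibration itself is
    outside the scope of this sketch."""
    heights = np.zeros(structured_channel.shape, dtype=np.float32)
    stripe_mask = structured_channel > structured_channel.mean()  # crude stripe detection
    for r, c in zip(*np.nonzero(stripe_mask)):
        heights[r, c] = stripe_to_height(r, c)
    # Fill the pixels between stripes by column-wise interpolation.
    for c in range(heights.shape[1]):
        col = heights[:, c]
        known = np.nonzero(col)[0]
        if known.size >= 2:
            col[:] = np.interp(np.arange(col.size), known, col[known])
    return heights

def expand_row_wise(unstructured_channel: np.ndarray, heights: np.ndarray) -> np.ndarray:
    """Resample each row of the unstructured light channel by arc length along the
    reconstructed surface, so the curved finger surface is flattened into an
    imprint-like image (a simple stand-in for the expansion transformation)."""
    h, w = unstructured_channel.shape
    out = np.zeros((h, w), dtype=np.float32)
    for r in range(h):
        dz = np.diff(heights[r].astype(np.float64))
        arc = np.concatenate([[0.0], np.cumsum(np.sqrt(1.0 + dz ** 2))])
        arc = arc / max(arc[-1], 1e-6) * (w - 1)  # normalise arc length to the pixel range
        out[r] = np.interp(np.arange(w), arc, unstructured_channel[r].astype(np.float64))
    return out
```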
Exemplarily, determining whether the posture of the posture judgment object is qualified based on the preview handprint image may include: determining, based on the preview handprint image, a first posture index of a processing object contained in the posture judgment object, the first posture index being selected from the posture indexes.
In one embodiment, the first posture index may be an index that needs to be judged separately for each processing object (that is, judged at the granularity of a single processing object), selected from the posture indexes. For example, when the posture judgment object is a processing object (such as a single finger), all posture indexes of each processing object may be judged individually. As another example, when the posture judgment object is a target object (such as four fingers taken as a whole), the pitch angle of each finger may be judged individually while the roll angle is judged for the four fingers as a whole; in this case the first posture index includes the pitch angle but not the roll angle. Whether the posture of the posture judgment object is qualified can be determined by judging whether the first posture index of each processing object contained in the posture judgment object is qualified. When the posture judgment object is the target object, some posture indexes may be computed for the object as a whole when judging whether its posture is qualified: if the overall posture index of the posture judgment object is qualified, that posture index of the target object is qualified. Other posture indexes may be computed per processing object, and the posture of the target object is determined to be qualified only when the posture indexes of all processing objects are qualified. Optionally, for certain posture indexes, the posture index of a single processing object may be used as representative of the posture index of the posture judgment object as a whole.
According to the above technical solution, the first posture index of each processing object contained in the posture judgment object can be determined individually, and whether the posture of the posture judgment object is qualified is then judged according to the first posture index. This solution is flexible to implement and helps ensure the accuracy of the posture judgment result.
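As a purely schematic illustration of the per-object versus whole-object split described above, the sketch below treats pitch as a per-finger check (a first posture index) and roll as a whole-hand check, following the four-finger example in the text; the class and function names are invented for the sketch.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FingerPose:
    pitch_ok: bool  # first posture index: judged per processing object (per finger)

def target_posture_qualified(fingers: List[FingerPose], whole_hand_roll_ok: bool) -> bool:
    """The target object (e.g. four fingers) is qualified only if every per-finger
    index is qualified and the indexes computed for the object as a whole are too."""
    return all(f.pitch_ok for f in fingers) and whole_hand_roll_ok
```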
Exemplarily, determining, based on the preview handprint image, the first posture index of a processing object contained in the posture judgment object may include at least one of the following: determining the pitch angle or roll angle of the processing object according to a first structured light repeating unit and a second structured light repeating unit corresponding to the processing object in the structured light channel of the preview handprint image, the preview handprint image being captured under illumination by the structured light source and including a structured light channel; determining the roll angle or pitch angle of the processing object according to the heights of two stripe points of a third structured light repeating unit corresponding to the processing object in the structured light channel of the preview handprint image, the preview handprint image being captured under illumination by the structured light source and including a structured light channel; and determining the position and/or yaw angle of the processing object according to the position and/or shape of the processing object in the unstructured light channel of the preview handprint image, the preview handprint image being captured under illumination by the unstructured light source and including an unstructured light channel.
The preview handprint image may be captured under illumination by the structured light source and may therefore include a structured light channel. In one embodiment, the structured light may be stripe light; the first, second and third structured light repeating units in this embodiment, and the fourth, fifth and sixth structured light repeating units in the embodiments below, may each represent any stripe of the stripe light. In the preview handprint image, if the first and second structured light repeating units are arranged along a direction perpendicular to the axis of a processing object (for example, a single finger), they can be used to determine the pitch angle of that processing object. For example, the height difference between the first and second structured light repeating units corresponding to the processing object may be determined and used to represent the pitch angle of the processing object. Exemplarily, if this height difference is smaller than a first height threshold, the pitch angle of the processing object is determined to be qualified; otherwise it is determined to be unqualified. The first height threshold may be set to any suitable value as required. Exemplarily, Fig. 31A shows a schematic diagram of a structured light channel image according to an embodiment of the present application. Note that the structured light channel image shown in Fig. 31A is generated by projecting structured light onto the processing object (a single finger); this structured light contains both stripe structured light and scattered-dot structured light, and thus forms two kinds of repeating units, stripes and scattered dots, on the finger. The structured light repeating units described herein refer mainly to the individual stripes of the stripe structured light; scattered dots formed within the stripes, as in Fig. 31A, may be disregarded. Referring to Fig. 31A, the first and second structured light repeating units are shown; assuming the height difference between them is H1, if H1 is smaller than the first height threshold, the pitch angle of the processing object is determined to be qualified, otherwise it is determined to be unqualified. If the first and second structured light repeating units extend along a direction parallel to the axis of a processing object (for example, a single finger), they can be used to determine the roll angle of that processing object. For example, the height difference between the first and second structured light repeating units corresponding to the processing object may be determined and used to represent the roll angle of the processing object, as shown in Fig. 31B. Fig. 31B shows a schematic arrangement of structured light repeating units according to another embodiment of the present application. It will be understood that Fig. 31B only shows the first and second structured light repeating units arranged in a direction parallel to the axis of the processing object; the finger itself is not shown. The finger corresponding to Fig. 31B may, however, be understood as placed in the same position as the finger shown in Fig. 31A, so that the structured light repeating units shown in Fig. 31B are arranged along a direction parallel to the axis of that finger. For example, the height difference between the first and second structured light repeating units may be calculated; if this height difference is smaller than a second height threshold, the roll angle of the processing object is determined to be qualified, otherwise it is determined to be unqualified. The second height threshold may be set to any suitable value as required. When a structured light repeating unit is a stripe, the height of the stripe may be represented by the height of a certain point on the stripe or by the average height of several points, for example the height of the midpoint of the stripe.
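A minimal sketch of the two-stripe check described above, assuming the stripe heights have already been recovered from the structured light channel; the threshold argument is a placeholder for the first or second height threshold, whose values the text leaves open.

```python
def stripe_height(stripe_heights_mm):
    """Height of one repeating unit (stripe), represented here by its midpoint,
    as the text allows (averaging several points would also be possible)."""
    return stripe_heights_mm[len(stripe_heights_mm) // 2]

def two_stripe_angle_ok(first_stripe_mm, second_stripe_mm, height_threshold_mm):
    """Pitch check when the two stripes lie perpendicular to the finger axis,
    roll check when they lie parallel to it: the height difference H1 between
    the two repeating units must stay below the corresponding threshold."""
    h1 = abs(stripe_height(first_stripe_mm) - stripe_height(second_stripe_mm))
    return h1 < height_threshold_mm
```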
In the preview handprint image, if the third structured light repeating unit extends along a direction perpendicular to the axis of a processing object (for example, a single finger), the roll angle of that processing object can be determined from the heights of two stripe points of the third structured light repeating unit. If the third structured light repeating unit extends along a direction parallel to the axis of a processing object, the pitch angle of that processing object can be determined from the heights of two stripe points of the third structured light repeating unit. Exemplarily, when the third structured light repeating unit is a stripe, two stripe points that are relatively far apart may be selected on it. For the case where the third structured light repeating unit extends perpendicular to the axis of the processing object, the height difference between these two stripe points may be calculated and used to represent the roll angle of the processing object. Fig. 32A shows a schematic diagram of a structured light image according to another embodiment of the present application; as shown in Fig. 32A, the height difference between a first stripe point a and a second stripe point b on the third structured light repeating unit is calculated, and if this height difference is smaller than a third height threshold, the roll angle of the finger is determined to be qualified, otherwise the roll angle of the processing object is determined to be unqualified. The third height threshold may be set to any suitable value as required, for example 4 mm. Similarly, for the case where the third structured light repeating unit extends parallel to the axis of the processing object, the height difference between two stripe points on the third structured light repeating unit may be calculated and used to represent the pitch angle of the processing object, as shown in Fig. 32B. Fig. 32B shows a schematic arrangement of structured light repeating units according to another embodiment of the present application. It will be understood that Fig. 32B only shows the third structured light repeating unit arranged in a direction parallel to the axis of the processing object; the finger itself is not shown. The finger corresponding to Fig. 32B may, however, be understood as placed in the same position as the finger shown in Fig. 32A, so that the structured light repeating unit shown in Fig. 32B extends along a direction parallel to the axis of that finger. Referring to Fig. 32B, if the height difference between the two stripe points a' and b' on any third structured light repeating unit is smaller than a fourth height threshold, the pitch angle of the finger is determined to be qualified, otherwise the pitch angle of the processing object is determined to be unqualified. The fourth height threshold may be set to any suitable value as required, for example 4 mm.
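The single-stripe variant can be sketched in the same spirit; the 4 mm default reflects the example values given for the third and fourth height thresholds, and the point heights are assumed to have been recovered from the structured light channel.

```python
def one_stripe_angle_ok(point_a_height_mm: float,
                        point_b_height_mm: float,
                        threshold_mm: float = 4.0) -> bool:
    """Two far-apart points a and b on the third repeating unit: a roll check when
    the stripe is perpendicular to the finger axis, a pitch check when parallel."""
    return abs(point_a_height_mm - point_b_height_mm) < threshold_mm
```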
The position and/or yaw angle of a processing object can be determined from the position and/or shape of the processing object in the unstructured light channel of the preview handprint image (which may be referred to as the unstructured light channel image). For example, the obtained unstructured light channel image of the preview handprint image may be segmented to determine a mask or envelope of each processing object. Image segmentation of the unstructured light channel image may be implemented with any suitable existing or future image segmentation network model, for example one or more of the following: Fully Convolutional Networks (FCN), U-Net, the DeepLab series, V-Net, and other network models. Exemplarily, the target object in the current unstructured light channel image contains two processing objects, for example the user's two thumbs. Based on image segmentation, the masks or envelopes of the two thumbs in the unstructured light channel image can be obtained. From the masks or envelopes of the two thumbs, their positions and/or shapes in the unstructured light channel image can be determined, and it can then be judged whether the position and/or yaw angle of each processing object is qualified.
Exemplarily, the position of the mask and/or envelope of a processing object (for example the left thumb) may be used to represent the position of that processing object, and the shape of its mask and/or envelope may be used to represent its shape. Fig. 33 shows a schematic diagram of a processing object in an unstructured light channel image according to an embodiment of the present application. In one example, it may be judged whether the position of a processing object (that is, the position of its mask and/or envelope) lies within the central region of the current unstructured light channel image. The size of the central region may be set as required, and it contains at least the center point of the unstructured light channel image. If a processing object (for example the left thumb) lies within the central region of the unstructured light channel image, its position is determined to be qualified; otherwise its position is determined to be unqualified. As shown in Fig. 33, point r is the center point of the unstructured light channel image; from the position of the envelope of the processing object it can be judged that the processing object does not lie within the central region of the current unstructured light channel image, so its position is unqualified. In another example, different position criteria may be set for different fingers. For example, when the target object is four fingers, it may be judged whether the top of the middle finger (which may be the top of the middle finger's mask or envelope) is higher than a first height position in the unstructured light channel image, and if so, the position of the middle finger is determined to be unqualified; it may be judged whether the top of the little finger (which may be the top of the little finger's mask or envelope) is lower than a second height position in the unstructured light channel image, and if so, the position of the little finger is determined to be unqualified; it may be judged whether the leftmost point on the leftmost of the four fingers (which may be the leftmost point of that finger's mask or envelope) lies to the left of a first width position of the unstructured light channel image, and if so, the position of the leftmost finger is determined to be unqualified; and it may be judged whether the leftmost point on the rightmost of the four fingers (which may be the leftmost point of that finger's mask or envelope) lies to the right of a second width position of the unstructured light channel image, and if so, the position of the rightmost finger is determined to be unqualified. The first height position, second height position, first width position and second width position may all be set as required. In one example, the first height position may be 0.1H from the upper edge of the unstructured light channel image, the second height position may be 0.3H from its lower edge, the first width position may be 0.1W from its left edge, and the second width position may be 0.1W from its right edge, where H is the full height of the unstructured light channel image and W is its full width.
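The four-finger position criteria above can be sketched as follows, using the example thresholds 0.1H, 0.3H and 0.1W; the mask dictionary layout and helper names are assumptions of the sketch, and the segmentation step itself is taken as given.

```python
import numpy as np

def four_finger_positions_ok(masks: dict, image_h: int, image_w: int) -> bool:
    """`masks` maps 'index', 'middle', 'ring', 'little' to boolean arrays of shape
    (image_h, image_w) produced by the segmentation model."""
    def top_row(mask):   return np.nonzero(mask.any(axis=1))[0].min()
    def left_col(mask):  return np.nonzero(mask.any(axis=0))[0].min()
    def right_col(mask): return np.nonzero(mask.any(axis=0))[0].max()

    if top_row(masks['middle']) < 0.1 * image_h:   # middle fingertip above the first height position
        return False
    if top_row(masks['little']) > 0.7 * image_h:   # little fingertip below the second height position (0.3H from the bottom)
        return False
    leftmost_finger = min(masks.values(), key=left_col)
    rightmost_finger = max(masks.values(), key=right_col)
    if left_col(leftmost_finger) < 0.1 * image_w:  # leftmost finger past the first width position
        return False
    if left_col(rightmost_finger) > 0.9 * image_w: # rightmost finger past the second width position
        return False
    return True
```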
In one example, it may be judged whether the shape of a processing object (that is, the shape of its mask and/or envelope) in the current unstructured light channel image is tilted relative to the axis of that image. The axis of the unstructured light channel image may be a center line of the image extending along a preset direction (for example, the height direction of the image). For example, the inclination angle between the shape of a processing object and the axis of the current unstructured light channel image may be determined; if this inclination angle is smaller than a preset angle threshold, the yaw angle of the processing object is determined to be qualified, otherwise it is determined to be unqualified. The preset angle threshold may be set to any suitable value as required. Fig. 34 shows a schematic diagram of an unstructured light channel image according to an embodiment of the present application. As shown in Fig. 34, the dashed line M1N1 represents the axis of the unstructured light channel image and the dash-dot line M2N2 represents the axis of the processing object; the inclination angle between the axis of the processing object and the axis of the current unstructured light channel image is the angle α. If the inclination angle α is smaller than the preset angle threshold, the yaw angle of the processing object is determined to be qualified, otherwise it is determined to be unqualified.
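One possible way to obtain the inclination angle α of Fig. 34 is to fit the principal axis of the segmented mask; the document does not prescribe how the axis of the processing object is computed, so the PCA fit and the function names below are assumptions of the sketch.

```python
import numpy as np

def yaw_ok(mask: np.ndarray, max_tilt_deg: float) -> bool:
    """Compare the principal axis of the finger mask (M2N2 in Fig. 34) with the
    vertical image axis (M1N1) and accept if the tilt is below the threshold."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs - xs.mean(), ys - ys.mean()], axis=1).astype(np.float64)
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]  # dominant direction of the mask
    tilt_deg = np.degrees(np.arctan2(abs(axis[0]), abs(axis[1])))  # angle to the vertical
    return tilt_deg < max_tilt_deg
```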
According to the above technical solution, the pitch angle and/or roll angle of a processing object is determined from the structured light repeating units in the structured light channel, and the position and/or yaw angle of the processing object is determined from the position and/or shape of the processing object in the unstructured light channel. Different judgment approaches can thus be adopted for different posture judgment needs, which ensures the accuracy of the judgment results.
Exemplarily, when the posture judgment object is a target object that includes a plurality of processing objects, determining whether the posture of the posture judgment object is qualified based on the preview handprint image may include: determining the roll angle of the target object according to the heights of the fourth structured light repeating units corresponding to two processing objects in the structured light channel of the preview handprint image; or determining the position and/or yaw angle of the target object according to the position and/or shape of the target object in the unstructured light channel of the preview handprint image, the preview handprint image being captured under illumination by the unstructured light source and including an unstructured light channel.
In one embodiment, the posture judgment object may be a target object that includes a plurality of processing objects. In this case, posture indexes other than the first posture index may be computed for the object as a whole, for example the position and roll angle of the target object, rather than being computed separately for each processing object; this increases computation speed. For example, the posture judgment object may include four fingers, namely the index finger, middle finger, ring finger and little finger. The overall roll angle of the four fingers can be determined from the heights of the fourth structured light repeating units corresponding to any two fingers (for example, the index finger and the little finger) in the structured light channel of the preview handprint image. Note that the fourth structured light repeating units corresponding to the two processing objects are the same structured light repeating unit (for example, the same stripe) in the structured light, and the fourth structured light repeating unit may extend along a direction perpendicular to the axis of the processing objects. When the structured light repeating unit is a stripe, its height may be determined as described above. Whether the overall roll angle of the target object is qualified can be judged from the difference between the heights of the fourth structured light repeating unit as measured on the two processing objects. For example, if this height difference is greater than a fifth height threshold, the roll angle of the target object is determined to be unqualified; otherwise it is qualified. The fifth height threshold may be set to any suitable value as required, for example 5 mm.
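A minimal sketch of the whole-hand roll check: the same stripe (the fourth repeating unit) is read on two fingers, and the 5 mm default mirrors the example value given for the fifth height threshold; the argument names are illustrative only.

```python
def whole_hand_roll_ok(height_on_index_mm: float,
                       height_on_little_mm: float,
                       fifth_threshold_mm: float = 5.0) -> bool:
    """The roll of the four fingers as a whole is unqualified when the height
    difference of the shared stripe exceeds the threshold."""
    return abs(height_on_index_mm - height_on_little_mm) <= fifth_threshold_mm
```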
From the description in the foregoing embodiments of "determining the position and/or yaw angle of the processing object according to the position and/or shape of the processing object in the unstructured light channel of the preview handprint image", those skilled in the art can understand how to implement "determining the position and/or yaw angle of the target object according to the position and/or shape of the target object in the unstructured light channel of the preview handprint image"; the two are similar and, for brevity, are not described again here.
According to the above technical solution, the overall roll angle, position and/or yaw angle of the target object can be determined from the preview handprint image in the structured light channel and the unstructured light channel. This approach can be used to judge as a whole whether the posture of the target object is qualified; it is efficient and its accuracy can be ensured.
Exemplarily, the preview handprint image is captured while the structured light source is illuminating; the roll angle and/or pitch angle of the posture judgment object is determined from a plurality of fifth structured light repeating units selected from the structured light units included in the structured light channel of the preview handprint image, and the three-dimensional information of the processing object is determined from a plurality of sixth structured light repeating units selected from the structured light units included in the structured light channel of the handprint image to be processed, the density of the fifth structured light repeating units being lower than the density of the sixth structured light repeating units.
In one embodiment, a preview handprint image can be obtained by illuminating the handprint collection area with structured light. From the preview handprint image obtained under structured light illumination, the roll angle and/or pitch angle of the posture judgment object can be determined, by the methods described in the above embodiments, from the plurality of fifth structured light repeating units contained in it. Meanwhile, at least some sixth structured light repeating units may be selected from the structured light units included in the structured light channel of the handprint image to be processed and used for three-dimensional reconstruction to determine the three-dimensional information of the processing object, the density of the sixth structured light repeating units being greater than that of the fifth structured light repeating units. In other words, the structured light repeating units used for posture determination are distributed more sparsely, while those used for three-dimensional reconstruction are distributed more densely. For example, when the structured light repeating units are stripes, all stripes may be used for three-dimensional reconstruction while only a sampled subset of the stripes is used to determine the roll angle and/or pitch angle of the posture judgment object.
According to the above technical solution, the roll angle and/or pitch angle of the posture judgment object is determined from the lower-density fifth structured light repeating units, which is computationally efficient, while the three-dimensional information of the processing object is determined from the higher-density sixth structured light repeating units, which ensures the accuracy of the determined three-dimensional information.
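The sparse-versus-dense split can be expressed very simply; the stride of 4 below is illustrative only, since the text fixes only the ordering of the two densities, not their values.

```python
def split_stripes_for_pose_and_reconstruction(all_stripes, pose_stride: int = 4):
    """Dense set (sixth repeating units) for 3D reconstruction, sparse subset
    (fifth repeating units) for the cheaper roll/pitch check."""
    stripes_for_reconstruction = list(all_stripes)
    stripes_for_pose = stripes_for_reconstruction[::pose_stride]
    return stripes_for_pose, stripes_for_reconstruction
```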
Exemplarily, determining whether the posture of the posture judgment object is qualified based on the preview handprint image may include: judging whether the position and/or yaw angle of the posture judgment object is qualified; if the position and/or yaw angle of the posture judgment object is qualified, judging whether the roll angle and/or pitch angle of the posture judgment object is qualified; and if the roll angle and/or pitch angle of the posture judgment object is qualified, determining that the posture of the posture judgment object is qualified.
In one embodiment, the order in which the indexes are judged may be determined from the computational cost of each index, with the less costly indexes judged first. For example, when determining whether the posture of the posture judgment object is qualified based on the preview handprint image, it may first be judged whether the position and/or yaw angle of the posture judgment object is qualified. If the position and/or yaw angle is qualified, it is then judged whether the roll angle and/or pitch angle is qualified. If the position and/or yaw angle is not qualified, the posture of the posture judgment object can be determined to be unqualified; the judgment may optionally be stopped at this point, and the preview image acquisition device may optionally re-capture a preview handprint image. If the roll angle and/or pitch angle of the posture judgment object is qualified, the posture of the posture judgment object can be determined to be qualified; if not, the posture is determined to be unqualified, and the preview image acquisition device may optionally re-capture a preview handprint image.
According to the above technical solution, since the algorithms for judging the position and/or yaw angle of the posture judgment object generally cost less than the algorithms for judging the roll angle and/or pitch angle, judging in the order of this solution reduces wasted resources and improves efficiency.
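The staged ordering reads naturally as a short-circuiting check; the two callables below are placeholders for the checks sketched earlier in this section, not functions defined by the present application.

```python
from typing import Callable

def posture_qualified(preview_image,
                      position_and_yaw_ok: Callable,
                      roll_and_pitch_ok: Callable) -> bool:
    """Cheaper segmentation-based checks first, structured-light checks second."""
    if not position_and_yaw_ok(preview_image):  # inexpensive position / yaw check
        return False                            # caller may re-acquire a preview image here
    return roll_and_pitch_ok(preview_image)     # costlier roll / pitch check
```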
Exemplarily, when the posture judgment object is a processing object, before the processing step S3040 the method may further include: judging whether the number of to-be-processed handprint images corresponding to each processing object contained in the target object has reached a preset number threshold; if so, proceeding to the processing step, and otherwise returning to perform at least the acquisition step S3030. Determining the three-dimensional information of a processing object according to the structured light channel of the handprint image to be processed corresponding to that processing object may include: determining the three-dimensional information of the processing object according to the structured light channel of a selected to-be-processed handprint image corresponding to that processing object, the selected to-be-processed handprint image being the one among the preset-number-threshold to-be-processed handprint images corresponding to that processing object that meets the quality requirement.
In one embodiment, the target object may include at least one processing object, and for each processing object a sufficient number (for example, a preset number threshold) of to-be-processed images needs to be acquired, so that in step S3040 the to-be-processed image of the best quality corresponding to the processing object can be selected for subsequent processing. Before processing step S3040, the preset number threshold for to-be-processed handprint images may be set in advance. The preset number threshold may be any integer greater than 0, for example 5, 6 or 8. Exemplarily, the preset number threshold may be equal to 5. For example, when the target object includes two thumbs, it can be judged for each processing object (each thumb) whether the number of to-be-processed handprint images corresponding to that processing object has reached the preset number threshold of 5, where the to-be-processed handprint images corresponding to each processing object are images captured while the posture of that processing object was qualified. For the left thumb, if fewer than 5 corresponding to-be-processed handprint images are currently available, the acquisition step S3030 described in the foregoing embodiments, or the collection step S3010 through the acquisition step S3030 described in the foregoing embodiments, may continue to be performed until the number of to-be-processed handprint images corresponding to the left thumb reaches 5. Exemplarily, if a posture judgment has determined that the posture of the processing object is qualified, and within a short time after that judgment multiple target handprint images corresponding to the processing object have been captured and the to-be-processed handprint images have been determined from them, only the acquisition step S3030 may be performed. Exemplarily, the posture may also be judged anew, that is, the whole flow of collection step S3010, posture judgment step S3020 and acquisition step S3030 may be performed again to obtain new to-be-processed handprint images. Similarly, for the right thumb, if fewer than 5 corresponding to-be-processed handprint images are currently available, the acquisition step S3030, or the collection step S3010 through the acquisition step S3030, described in the foregoing embodiments may continue to be performed until the number of to-be-processed handprint images corresponding to the right thumb reaches 5. Once the to-be-processed handprint images corresponding to both the left thumb and the right thumb have reached 5, the processing step S3040 can begin.
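The per-thumb counting loop can be sketched as below. The preset count of 5 follows the example in the text; `acquire_preview`, `thumb_posture_ok` and `capture_target_images` are injected callables standing in for steps S3010 to S3030, which this sketch does not implement.

```python
PRESET_COUNT = 5  # example threshold from the text

def collect_until_enough(acquire_preview, thumb_posture_ok, capture_target_images,
                         thumbs=('left_thumb', 'right_thumb')):
    """Keep acquiring until every processing object (each thumb) has PRESET_COUNT
    qualified to-be-processed handprint images."""
    images = {t: [] for t in thumbs}
    while any(len(images[t]) < PRESET_COUNT for t in thumbs):
        preview = acquire_preview()                                          # cf. step S3010
        for t in thumbs:
            if len(images[t]) < PRESET_COUNT and thumb_posture_ok(preview, t):   # cf. S3020
                images[t] = (images[t] + capture_target_images(t))[:PRESET_COUNT]  # cf. S3030
    # The best-quality image per thumb is then selected for 3D processing (step S3040).
    return images
```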
From the preset-number-threshold to-be-processed handprint images corresponding to any processing object, the to-be-processed handprint image that meets the quality requirement is selected as the selected to-be-processed handprint image, and the three-dimensional information of the processing object is determined according to the structured light channel of that selected image. The quality requirement may be set as needed, for example requiring the highest quality.
According to the above technical solution, it is ensured for each individual processing object that the number of its to-be-processed handprint images reaches the preset number threshold. This scheme is flexible to implement: whenever the posture of a processing object is qualified, its corresponding to-be-processed handprint images can be acquired, so that the to-be-processed images of that processing object reach the preset number threshold as soon as possible; efficiency is therefore relatively high. In addition, according to the above scheme, a to-be-processed handprint image that meets the quality requirement is selected from the preset-number-threshold to-be-processed handprint images corresponding to the processing object and used to determine the three-dimensional information of the processing object. This ensures that the image quality of the to-be-processed handprint image used for determining the three-dimensional information meets the quality requirement, thereby improving the accuracy of the determined three-dimensional information.
Exemplarily, when the posture judgment object is the target object, before the processing step the method may further include: judging whether the number of target handprint images corresponding to the target object has reached a preset number threshold; if so, proceeding to the processing step, and otherwise returning to perform at least the acquisition step. Determining the three-dimensional information of a processing object according to the structured light channel of the handprint image to be processed corresponding to that processing object may include: determining the three-dimensional information of the processing object according to the structured light channel of a selected to-be-processed handprint image corresponding to that processing object, the selected to-be-processed handprint image being the one among the preset-number-threshold to-be-processed handprint images corresponding to that processing object that meets the quality requirement.
In this embodiment, the posture judgment object is the target object; in this case the target handprint images of the processing objects included in the target object necessarily reach the preset number threshold at the same time, because the postures of the processing objects always become qualified at the same time. It will be appreciated that, even though the postures of the processing objects are always qualified at the same time, the target image acquisition device corresponding to each processing object may differ. Suppose the preset number threshold is 3 and each of the index, middle, ring and little fingers has so far obtained 1 target fingerprint image (captured by cameras C1, C1, C2 and C2, respectively); then 2 more target fingerprint images need to be captured for each processing object while the posture of the four fingers is qualified. When continuing to capture, the acquisition step S3030 described in the foregoing embodiments, or the collection step S3010 through the acquisition step S3030, may continue to be performed until the number of target handprint images corresponding to the target object reaches 3. Exemplarily, if a posture judgment has determined that the posture of the target object is qualified, and within a short time after that judgment multiple target handprint images corresponding to the target object have already been captured, only the acquisition step S3030 may be performed. Exemplarily, the posture may also be judged anew, that is, the whole flow of collection step S3010, posture judgment step S3020 and acquisition step S3030 may be performed again to obtain new target handprint images.
The implementation of "determining the three-dimensional information of the processing object according to the structured light channel of the handprint image to be processed corresponding to that processing object" is similar to the previous embodiment and is not repeated here.
According to the above technical solution, it is ensured for the entire target object that the number of its target handprint images reaches the preset number threshold. This scheme is relatively strict: the corresponding target handprint images are acquired only when the posture of the entire target object is qualified (if the posture of one processing object is unqualified while the others are qualified, the posture of the target object as a whole is unqualified, and no target fingerprint image of any processing object is obtained), and the acquired images are required to reach the preset number threshold. The posture requirement on the target object is therefore relatively high, and the quality of the obtained target handprint images is correspondingly high, which benefits subsequent operations such as handprint recognition. In addition, according to the above scheme, a to-be-processed handprint image that meets the quality requirement is selected from the preset-number-threshold to-be-processed handprint images corresponding to the processing object and used to determine the three-dimensional information of the processing object. This ensures that the image quality of the to-be-processed handprint image used for determining the three-dimensional information meets the quality requirement, thereby improving the accuracy of the determined three-dimensional information.
According to another aspect of the present application, a non-contact handprint collection device is also provided. Fig. 35 shows a schematic block diagram of a non-contact handprint collection device 3500 according to an embodiment of the present application. As shown in Fig. 35, the device 3500 may include a processor 3510 and a memory 3520, the memory 3520 storing computer program instructions which, when run by the processor 3510, are used to perform the image stitching method 400 described above, the method 3000 for non-contact collection of a target object's handprint, or the handprint collection method implemented by the handprint collection system 200 (i.e., the scheme described above of capturing images while the blue light source group emits light according to the blue light-emission scheme and the green light source group emits light according to the green light-emission scheme).
From the description of the method 3000 for non-contact collection of a target object's handprint in the foregoing embodiments, those of ordinary skill in the art can understand the implementation of this non-contact handprint collection device and its beneficial effects; for brevity, they are not described again here.
In addition, according to an embodiment of the present application, a storage medium is also provided, on which program instructions are stored; when the program instructions are run by a computer or a processor, they are used to perform the corresponding steps of the image stitching method 400 described above, the method 3000 for non-contact collection of a target object's handprint, or the handprint collection method implemented by the handprint collection system 200 (i.e., the scheme described above of capturing images while the blue light source group emits light according to the blue light-emission scheme and the green light source group emits light according to the green light-emission scheme). The storage medium may include, for example, a memory card of a smartphone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In addition, according to an embodiment of the present application, a computer program product is also provided; the computer program product includes a computer program which, when run, is used to perform the image stitching method 400 described above, the method 3000 for non-contact collection of a target object's handprint, or the handprint collection method implemented by the handprint collection system 200 (i.e., the scheme described above of capturing images while the blue light source group emits light according to the blue light-emission scheme and the green light source group emits light according to the green light-emission scheme).
In addition, according to an embodiment of the present application, a computer program is also provided which, when run, is used to perform the image stitching method 400 described above, the method 3000 for non-contact collection of a target object's handprint, or the handprint collection method implemented by the handprint collection system 200 (i.e., the scheme described above of capturing images while the blue light source group emits light according to the blue light-emission scheme and the green light source group emits light according to the green light-emission scheme).
In the description provided herein, numerous specific details are set forth. It will be understood, however, that embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the application and aid understanding of one or more of its various aspects, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the application. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that the corresponding technical problem may be solved with fewer than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the present application.
Those skilled in the art will appreciate that all features disclosed in this specification (including the accompanying claims, abstract and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination, except where such features are mutually exclusive. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the present application and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination. The use of the words first, second, third and so on does not indicate any order; these words may be interpreted as names.
The above is only the specific implementation of the present application or a description of the specific implementation, and the protection scope of the present application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

  1. A handprint collection system, comprising:
    a lighting system, the lighting system comprising one or more blue light sources and one or more green light sources, each light source being configured to emit light toward a handprint collection area, the handprint collection area being used for placing a photographed object, the photographed object comprising at least one of a single finger, two thumbs, four fingers, a flat palm, and a side palm;
    a structured light projector configured to project structured light toward the handprint collection area;
    one or more image acquisition devices configured to capture images of the handprint collection area while the lighting system and the structured light projector emit light toward the handprint collection area, so as to obtain handprint images of the photographed object; and
    a processing device configured to control the lighting system and the structured light projector to emit light and to control the one or more image acquisition devices to capture the handprint images, the processing device being further configured to process the handprint images captured by the one or more image acquisition devices to obtain a structured light image and an illumination light image corresponding to each of the one or more image acquisition devices, and to perform fusion processing on the illumination light images and transformation processing based on the structured light images to obtain a processed handprint image;
    wherein the illumination light images comprise a blue light image and a green light image, the images used for the fusion processing comprise at least one blue light image and at least one green light image, and the images used for the fusion processing correspond to the same image acquisition device, the same photographed object, capture times within an interval range, and different illumination conditions of capture, the different illumination conditions comprising a difference in at least one of the light source color and the illumination angle of the lighting system.
  2. The system according to claim 1, wherein the blue light emitted by the blue light sources has a wavelength of less than 430 nm, and the green light emitted by the green light sources has a wavelength of greater than 540 nm.
  3. The system according to claim 1 or 2, wherein the processing device performs the fusion processing on the illumination light images and the transformation processing based on the structured light images in the following manner:
    performing fusion processing on the blue light image and the green light image in at least one illumination light image captured by the same image acquisition device to obtain a fused image, and performing transformation processing on the fused image based on the structured light image corresponding to the at least one illumination light image to obtain the processed handprint image;
    or, based on the structured light image corresponding to at least one illumination light image captured by the same image acquisition device, performing transformation processing on the blue image in the at least one illumination light image to obtain a transformed blue image; based on the structured light image corresponding to the at least one illumination light image captured by the same image acquisition device, performing transformation processing on the green image in the at least one illumination light image to obtain a transformed green image; and performing fusion processing on the transformed blue image and the transformed green image to obtain the processed handprint image.
  4. The system according to claim 3, wherein the at least one illumination light image comprises a first illumination light image captured at a first moment and a second illumination light image captured at a second moment, and the processing device performs the fusion processing on the blue light image and the green light image in the at least one illumination light image captured by the same image acquisition device to obtain the fused image in the following manner:
    performing fusion processing on a first blue light image and a first green light image in the first illumination light image captured by the same image acquisition device at the first moment to obtain a first blue-green fused image; performing fusion processing on a second blue light image and a second green light image in the second illumination light image captured by the same image acquisition device at the second moment to obtain a second blue-green fused image; and performing fusion processing on the first blue-green fused image and the second blue-green fused image to obtain the fused image.
  5. The system according to claim 3, wherein the at least one illumination light image comprises a first illumination light image captured at a first moment and a second illumination light image captured at a second moment; the transformed blue image comprises a first transformed blue image determined from a first blue image in the first illumination light image and a second transformed blue image determined from a second blue image in the second illumination light image; and the transformed green image comprises a first transformed green image determined from a first green image in the first illumination light image and a second transformed green image determined from a second green image in the second illumination light image;
    wherein the processing device performs the fusion processing on the transformed blue image and the transformed green image to obtain the processed handprint image in the following manner:
    performing fusion processing on the first transformed blue image and the first transformed green image to obtain a first transformed fused image; performing fusion processing on the second transformed blue image and the second transformed green image to obtain a second transformed fused image; and performing fusion processing on the first transformed fused image and the second transformed fused image to obtain the processed handprint image.
  6. The system according to claim 1 or 2, wherein the illumination light images comprise a first illumination light image captured by the same image acquisition device at a first moment and a second illumination light image captured at a second moment, and the processing device performs the fusion processing on the illumination light images and the transformation processing based on the structured light images to obtain the processed handprint image in the following manner:
    performing fusion processing on a first blue image in the first illumination light image and a first green image in the first illumination light image to obtain a first blue-green fused image; performing fusion processing on a second blue image in the second illumination light image and a second green image in the second illumination light image to obtain a second blue-green fused image; performing transformation processing on the first blue-green fused image based on a first structured light image corresponding to the first illumination light image to obtain a first fused-and-transformed image; performing transformation processing on the second blue-green fused image based on a second structured light image corresponding to the second illumination light image to obtain a second fused-and-transformed image; and performing fusion processing on the first fused-and-transformed image and the second fused-and-transformed image to obtain the processed handprint image.
  7. The system according to any one of claims 1 to 6, wherein the fusion processing comprises computational fusion and/or neural network model fusion;
    the computational fusion comprises: calculating a sharpness index of the pixel at a first position in the images used for the fusion processing, and taking, as the pixel value of the pixel at the first position in the fused image, the pixel value of the pixel at the first position in whichever of the images used for the fusion processing has the better sharpness index;
    the neural network model fusion comprises: inputting the pixels at a second position in the images used for the fusion processing into a neural network model to obtain the pixel value of the pixel at the second position in the fused image.
  8. The system according to claim 7, wherein the fusion processing comprises both computational fusion and neural network model fusion,
    the computational fusion comprises: calculating a sharpness index of the pixel at a first position in the images used for the fusion processing, and, if the gap between the sharpness indexes of the plurality of images used for the fusion processing is greater than a gap threshold, taking, as the pixel value of the pixel at the first position in the fused image, the pixel value of the pixel at the first position in whichever of the images used for the fusion processing has the better sharpness index;
    the neural network model fusion comprises: inputting the pixels at a second position in the images used for the fusion processing into the neural network model to obtain the pixel value of the pixel at the second position in the fused image, the second position being a position at which the gap between the sharpness indexes of the plurality of images used for the fusion processing is not greater than the gap threshold.
  9. The system according to any one of claims 1 to 8, wherein the one or more image acquisition devices comprise a plurality of common image acquisition devices that jointly photograph the same subject, and the processing device performs the fusion processing on the illumination light images and the transformation processing based on the structured light images to obtain the processed handprint image in the following manner:
    performing fusion processing on the illumination light images, performing transformation processing based on the structured light images, and stitching the transformed images corresponding to the common image acquisition devices to obtain the processed handprint image; the subject comprising a single finger, a flat palm, or a side palm.
  10. The system according to claim 9, wherein the processing device performs the fusion processing on the illumination light images, the transformation processing based on the structured light images, and the stitching of the transformed images corresponding to the common image acquisition devices to obtain the processed handprint image in the following manner:
    for each image acquisition device among the plurality of common image acquisition devices, performing fusion processing on the blue image and the green image in at least one illumination light image corresponding to that image acquisition device to obtain a fused image corresponding to that image acquisition device; performing transformation processing on the fused image corresponding to that image acquisition device based on the structured light image corresponding to that image acquisition device to obtain a transformed fused image corresponding to that image acquisition device; and stitching the transformed fused images corresponding to the plurality of common image acquisition devices to obtain the processed handprint image;
    or, for each image acquisition device among the plurality of common image acquisition devices, performing transformation processing on the blue image in at least one illumination light image corresponding to that image acquisition device based on the structured light image corresponding to that image acquisition device to obtain a transformed blue image corresponding to that image acquisition device; performing transformation processing on the green image in the at least one illumination light image corresponding to that image acquisition device based on the structured light image corresponding to that image acquisition device to obtain a transformed green image corresponding to that image acquisition device; performing blue-green fusion processing on the transformed blue image and the transformed green image corresponding to that image acquisition device to obtain a transformed fused image corresponding to that image acquisition device; and stitching the transformed fused images corresponding to the plurality of common image acquisition devices to obtain the processed handprint image;
    or, for each image acquisition device among the plurality of common image acquisition devices, performing transformation processing on the blue image in at least one illumination light image corresponding to that image acquisition device based on the structured light image corresponding to that image acquisition device to obtain a transformed blue image corresponding to that image acquisition device, and performing transformation processing on the green image in the at least one illumination light image corresponding to that image acquisition device based on the structured light image corresponding to that image acquisition device to obtain a transformed green image corresponding to that image acquisition device; stitching the transformed blue images corresponding to the plurality of common image acquisition devices to obtain a stitched transformed blue image; stitching the transformed green images corresponding to the plurality of common image acquisition devices to obtain a stitched transformed green image; and performing fusion processing on the stitched transformed blue image and the stitched transformed green image to obtain the processed handprint image.
  11. The system according to claim 9 or 10, wherein the structured light projector comprises a light source, a pattern generating unit, and a converging lens; the pattern generating unit is arranged in front of the light source; the light source is configured to project the pattern on the pattern generating unit onto a projection plane so as to form a reconstruction structured light pattern and a stitching structured light pattern on the projection plane; the converging lens is arranged on the light transmission path between the pattern generating unit and the projection plane; the reconstruction structured light pattern is different from the stitching structured light pattern, and there is no boundary overlap between the reconstruction structured light units in the reconstruction structured light pattern and the stitching structured light units in the stitching structured light pattern; the reconstruction structured light units are used for the transformation processing based on the structured light images, and the stitching structured light units are used for stitching the transformed images corresponding to the plurality of common image acquisition devices.
  12. The system according to claim 11, wherein the reconstruction structured light units and the stitching structured light units satisfy at least one of the following conditions:
    the reconstruction structured light units comprise stripes;
    the stitching structured light units comprise scattered dots;
    the distribution density of the stitching structured light units is greater than the distribution density of the reconstruction structured light units.
  13. The system according to any one of claims 9 to 12, wherein the plurality of common image acquisition devices comprise a first common image acquisition device, a second common image acquisition device, and a third common image acquisition device; the optical axis of the first common image acquisition device is perpendicular to the plane of the handprint collection area that faces the plurality of common image acquisition devices; the optical axis of the second common image acquisition device forms a first preset angle with the optical axis of the first common image acquisition device; the optical axis of the third common image acquisition device forms a second preset angle with the optical axis of the first common image acquisition device; and the subject comprises a single finger.
  14. The system according to any one of claims 9 to 13, wherein the plurality of common image acquisition devices comprise a fourth common image acquisition device, a fifth common image acquisition device, and a sixth common image acquisition device; the lenses of the fourth common image acquisition device, the fifth common image acquisition device, and the sixth common image acquisition device are located in a predetermined plane and are respectively aimed at a plurality of first sub-regions within the handprint collection area, any two adjacent ones of the plurality of first sub-regions overlapping or adjoining each other; and the subject comprises a flat palm or a side palm.
  15. The handprint acquisition system according to any one of claims 1 to 14, wherein the one or more image acquisition devices comprise a plurality of independent image acquisition devices that each photograph the same subject, the clearly imageable object-plane subspaces of the plurality of independent image acquisition devices partially overlapping, and the processing device is further configured to determine, from the plurality of independent image acquisition devices and according to the structured light image and/or illumination light image captured by a preview image acquisition device, a target image acquisition device whose clearly imageable object-plane subspace matches the position of the subject; the images used for the fusion processing are captured by the target image acquisition device; the subject comprises two thumbs or four fingers; and the preview image acquisition device is all of the plurality of independent image acquisition devices, or the image acquisition device with the largest field of view among the plurality of independent image acquisition devices, or an image acquisition device that is different from the plurality of independent image acquisition devices and has a shorter focal length than the plurality of independent image acquisition devices.
  16. The handprint acquisition system according to claim 15, wherein the plurality of independent image acquisition devices comprise a plurality of first independent image acquisition devices; the lenses of the plurality of first independent image acquisition devices are arranged around a center line and face the handprint collection area; each of the plurality of first independent image acquisition devices has a clearly imageable object-plane subspace within the depth-of-field range in front of and behind its optimal object plane; the clearly imageable object-plane subspaces of the plurality of first independent image acquisition devices partially overlap and together form a total clearly imageable space, the handprint collection area comprising the total clearly imageable object-plane space; the optimal object planes of the plurality of first independent image acquisition devices are located at different positions within the handprint collection area; and the focal lengths of the lenses of the plurality of first independent image acquisition devices are different, and/or the distances from the front ends of the lenses of the plurality of first independent image acquisition devices to the handprint collection area are different;
    wherein the processing device determines, from the plurality of independent image acquisition devices and according to the structured light image and/or illumination light image captured by the preview image acquisition device, the target image acquisition device whose clearly imageable object-plane subspace matches the position of the subject in the following manner:
    determining the height of the subject according to the structured light image captured by the preview image acquisition device, and determining, according to the height of the subject, a target image acquisition device whose clearly imageable object-plane subspace matches the height of the subject; or determining the height of each finger included in the subject according to the structured light image captured by the preview image acquisition device, and determining, according to the height of each finger, a target image acquisition device whose clearly imageable object-plane subspace matches the height of each finger.
  17. The handprint acquisition system according to claim 15, wherein the plurality of independent image acquisition devices comprise a plurality of eighth image acquisition devices, the lenses of the plurality of eighth image acquisition devices are located in a predetermined plane and are respectively aimed at a plurality of first sub-regions within the handprint collection area, and any two adjacent ones of the plurality of first sub-regions overlap or adjoin each other;
    wherein the processing device determines, from the plurality of independent image acquisition devices and according to the structured light image and/or illumination light image captured by the preview image acquisition device, the target image acquisition device whose clearly imageable object-plane subspace matches the position of the subject in the following manner:
    determining the horizontal position of the subject according to the structured light image and/or illumination light image captured by the preview image acquisition device, and determining, according to the height of the subject, a target image acquisition device whose clearly imageable object-plane subspace matches the horizontal position of the subject;
    or, determining the horizontal position of each finger included in the subject according to the structured light image and/or illumination light image captured by the preview image acquisition device, and determining, according to the horizontal position of each finger, a target image acquisition device whose clearly imageable object-plane subspace matches the horizontal position of each finger.
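For illustration only (not part of the claims): a minimal Python sketch of the two processing orders recited in claim 3 and of the two-moment pipeline of claim 6. The gradient-based sharpness measure and the helper names fuse and transform_with_structured_light are assumptions for the sketch; the claims do not prescribe a particular fusion rule or geometric correction.

```python
import numpy as np

def sharpness(img: np.ndarray) -> np.ndarray:
    """Per-pixel sharpness proxy: gradient magnitude (an assumed metric)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Keep, per pixel, the value from the locally sharper image."""
    return np.where(sharpness(img_a) >= sharpness(img_b), img_a, img_b)

def transform_with_structured_light(img: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stand-in for the structured-light-based transformation; a real system
    would use the recovered 3D shape to flatten/unroll the skin surface."""
    return img

# Claim 3, first alternative: fuse the blue and green images, then transform.
def fuse_then_transform(blue, green, depth):
    return transform_with_structured_light(fuse(blue, green), depth)

# Claim 3, second alternative: transform each image first, then fuse.
def transform_then_fuse(blue, green, depth):
    return fuse(transform_with_structured_light(blue, depth),
                transform_with_structured_light(green, depth))

# Claim 6: per-moment blue/green fusion and transformation, then a final fusion
# of the two fused-and-transformed images captured at two different moments.
def two_moment_pipeline(blue1, green1, depth1, blue2, green2, depth2):
    first = transform_with_structured_light(fuse(blue1, green1), depth1)
    second = transform_with_structured_light(fuse(blue2, green2), depth2)
    return fuse(first, second)
```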
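For illustration only: a NumPy sketch of the per-pixel rule of claims 7 and 8, which uses the purely computational choice where one image is clearly sharper and falls back to a neural network model wherever the sharpness gap is not greater than the gap threshold. The sharpness maps, the gap_threshold value, and the nn_model callable are assumed inputs standing in for the unspecified sharpness index and trained fusion model.

```python
import numpy as np

def fuse_sharpness_or_nn(images, sharpness_maps, gap_threshold, nn_model):
    """images:         list of HxW arrays captured under different illumination
    sharpness_maps: per-pixel sharpness index for each image (same shapes)
    gap_threshold:  minimum sharpness gap for the purely computational rule
    nn_model:       callable mapping an (M, N) array of pixel vectors to M values"""
    stack = np.stack(images, axis=0)            # (N, H, W)
    sharp = np.stack(sharpness_maps, axis=0)    # (N, H, W)

    best = np.argmax(sharp, axis=0)             # index of the sharpest image per pixel
    gap = sharp.max(axis=0) - sharp.min(axis=0)

    # Computational fusion where one image is clearly sharper (claim 8, first branch).
    fused = np.take_along_axis(stack, best[None, ...], axis=0)[0]

    # Neural-network fusion where the sharpness gap is small (claim 8, second branch).
    ambiguous = gap <= gap_threshold
    if np.any(ambiguous):
        pixel_vectors = stack[:, ambiguous].T   # (M, N): one row per ambiguous pixel
        fused[ambiguous] = nn_model(pixel_vectors)
    return fused
```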
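For illustration only: one plausible way to stitch the transformed images of two adjacent common image acquisition devices (claims 9 to 11), assuming single-channel images and assuming the stitching structured-light dots have already been detected and matched into the point lists pts_left and pts_right; OpenCV homography estimation and a simple averaging blend are used here as stand-ins for whatever stitching method an implementation actually adopts.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right, pts_left, pts_right):
    """Warp img_right into img_left's frame using matched stitching-dot
    coordinates seen by both cameras, then blend the overlap by averaging.
    Assumes grayscale images on a dark (zero-valued) background."""
    h, w = img_left.shape[:2]
    out_size = (w * 2, h)  # assumed canvas wide enough for the pair
    H, _ = cv2.findHomography(np.asarray(pts_right, np.float32),
                              np.asarray(pts_left, np.float32), cv2.RANSAC)
    warped = cv2.warpPerspective(img_right, H, out_size).astype(np.float32)
    canvas = np.zeros((h, w * 2), dtype=np.float32)
    canvas[:h, :w] = img_left
    overlap = (canvas > 0) & (warped > 0)
    stitched = np.where(overlap, (canvas + warped) / 2.0,
                        np.maximum(canvas, warped))
    return stitched.astype(img_left.dtype)
```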
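For illustration only: a minimal sketch of the matching step of claims 15 and 16, in which a target camera is chosen for each finger according to the finger height recovered from the preview structured-light image. The per-finger ROI masks and the per-camera (z_min, z_max) clearly-imageable depth spans are assumed inputs, and picking the camera with the largest margin to its depth-of-field edges is only one possible tie-break.

```python
import numpy as np

def pick_target_cameras(depth_map, finger_rois, camera_focus_ranges):
    """depth_map:           height map recovered from the preview structured-light image
    finger_rois:         {finger_name: boolean HxW mask} (hypothetical detections)
    camera_focus_ranges: {camera_id: (z_min, z_max)} clearly imageable depth span
    Returns {finger_name: camera_id or None if no camera covers that height}."""
    targets = {}
    for name, roi in finger_rois.items():
        z = float(np.median(depth_map[roi]))        # representative finger height
        best, best_margin = None, -np.inf
        for cam, (z_min, z_max) in camera_focus_ranges.items():
            if z_min <= z <= z_max:
                margin = min(z - z_min, z_max - z)  # distance to the nearer DoF edge
                if margin > best_margin:
                    best, best_margin = cam, margin
        targets[name] = best
    return targets
```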
PCT/CN2023/076009 2022-02-14 2023-02-14 Handprint acquisition system WO2023151698A1 (en)

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
CN202210135632.3 2022-02-14
CN202220304780.9 2022-02-14
CN202210135632.3A CN116631015A (en) 2022-02-14 2022-02-14 Fingerprint acquisition system and method
CN202220304780.9U CN218004132U (en) 2022-02-14 2022-02-14 Handprint collecting system
CN202210344908.9A CN114862673A (en) 2022-03-31 2022-03-31 Image splicing method, device and system and storage medium
CN202210344908.9 2022-03-31
CN202220755751.4 2022-03-31
CN202220755751.4U CN217767172U (en) 2022-03-31 2022-03-31 Structured light projector and image acquisition system
CN202221118200.3U CN217426154U (en) 2022-05-10 2022-05-10 Non-contact handprint collecting device
CN202221118200.3 2022-05-10
CN202320061548.1 2023-01-09
CN202320061548.1U CN219497088U (en) 2023-01-09 2023-01-09 Non-contact type palm print collecting equipment
CN202310029644.2A CN117218682A (en) 2023-01-09 2023-01-09 Non-contact target object hand pattern acquisition method and equipment thereof
CN202310029644.2 2023-01-09

Publications (1)

Publication Number Publication Date
WO2023151698A1 (en)

Family

ID=87563720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/076009 WO2023151698A1 (en) 2022-02-14 2023-02-14 Handprint acquisition system

Country Status (1)

Country Link
WO (1) WO2023151698A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110064282A1 (en) * 2009-09-16 2011-03-17 General Electric Company Method and system for contactless fingerprint detection and verification
CN112016525A (en) * 2020-09-30 2020-12-01 墨奇科技(北京)有限公司 Non-contact fingerprint acquisition method and device
CN112232155A (en) * 2020-09-30 2021-01-15 墨奇科技(北京)有限公司 Non-contact fingerprint identification method and device, terminal and storage medium
CN112232163A (en) * 2020-09-30 2021-01-15 墨奇科技(北京)有限公司 Fingerprint acquisition method and device, fingerprint comparison method and device, and equipment
CN113614489A (en) * 2019-03-21 2021-11-05 砺铸智能装备私人有限公司 Monochromatic imaging using multiple wavelengths of light
CN114862673A (en) * 2022-03-31 2022-08-05 墨奇科技(北京)有限公司 Image splicing method, device and system and storage medium
CN217767172U (en) * 2022-03-31 2022-11-08 墨奇科技(北京)有限公司 Structured light projector and image acquisition system
CN218004132U (en) * 2022-02-14 2022-12-09 墨奇科技(北京)有限公司 Handprint collecting system

Similar Documents

Publication Publication Date Title
US11308711B2 (en) Enhanced contrast for object detection and characterization by optical imaging based on differences between images
TWI714131B (en) Control method, microprocessor, computer-readable storage medium and computer device
US7627196B2 (en) Image processing device and image capturing device
US8803963B2 (en) Vein pattern recognition based biometric system and methods thereof
US10916025B2 (en) Systems and methods for forming models of three-dimensional objects
US20150124086A1 (en) Hand and object tracking in three-dimensional space
TW201241547A (en) System, device and method for acquiring depth image
GB2400714A (en) Combined optical fingerprint recogniser and navigation control
CN108333860A (en) Control method, control device, depth camera and electronic device
CN112668540B (en) Biological characteristic acquisition and recognition system and method, terminal equipment and storage medium
CN104063679A (en) Blood vessel image taking device
CN111160136A (en) Standardized 3D information acquisition and measurement method and system
CN107622496A (en) Image processing method and device
WO2023151698A1 (en) Handprint acquisition system
JP4551626B2 (en) Stereo measurement method and stereo measurement device for hands
CN217767172U (en) Structured light projector and image acquisition system
WO2022148382A1 (en) Biometric acquisition and recognition system and method, and terminal device
CN218004132U (en) Handprint collecting system
KR20230107574A (en) Depth measurement via display
US20240005703A1 (en) Optical skin detection for face unlock
TWI675350B (en) Image Processing Apparatus And Method
KR20220037473A (en) Shooting device and authentication device
CN116631015A (en) Fingerprint acquisition system and method
JP2010039534A (en) Biological information acquisition device and personal authentication device

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23752475

Country of ref document: EP

Kind code of ref document: A1