WO2020118538A1 - Image collection method and device - Google Patents

Image collection method and device

Info

Publication number
WO2020118538A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
sub-band
image acquisition
Prior art date
Application number
PCT/CN2018/120445
Other languages
French (fr)
Chinese (zh)
Inventor
王星泽
闫静
舒远
Original Assignee
合刃科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 合刃科技(深圳)有限公司
Priority to PCT/CN2018/120445 priority Critical patent/WO2020118538A1/en
Priority to CN201880071472.2A priority patent/CN111344711A/en
Publication of WO2020118538A1 publication Critical patent/WO2020118538A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/146Methods for optical code recognition the method including quality enhancement steps
    • G06K7/1465Methods for optical code recognition the method including quality enhancement steps using several successive scans of the optical code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light

Definitions

  • The invention relates to the field of image processing, and in particular to an image acquisition method and device.
  • In traditional technology, barcode and two-dimensional (QR) code recognition offers data acquisition, automatic identification, and fast response; it is cheap to produce, easy to use, and technically mature, and in everyday scenarios such as identity verification, online payment, and information labeling it gives users a very convenient human-computer interaction experience.
  • Conventional barcode and QR code recognition involves two stages: first, the capture of the barcode or QR code image, and second, the recognition of the information in that image.
  • If the barcode or QR code image captured in the first stage is unclear, the accuracy of barcode and QR code recognition suffers greatly.
  • For example, when a barcode or QR code is printed on a highly reflective curved surface or on transparent/translucent material, the angle of the light source makes the brightness of the captured code image very uneven, sometimes producing a "yin-yang" code that is half bright and half dark. In such cases the image collected by an ordinary code reader is very unclear and a large amount of information is lost, so the subsequent recognition step either fails to recognize the encoded information or decodes the wrong information.
  • To address the noise caused by uneven illumination during capture, traditional techniques apply median filtering to denoise the barcode or QR code image and combine it with adaptive-brightness binarization. This alleviates uneven illumination, but for codes on highly reflective curved surfaces or transparent/translucent materials, or codes partially damaged by perforation or staining, recognition accuracy remains low or recognition fails altogether.
  • Based on this, to increase the amount of feature information contained in the captured image to be recognized, and thereby to address the low recognition accuracy caused in the prior art by unfavorable illumination angles and by blur in the image itself, an image acquisition method is proposed.
  • An image acquisition method, including: acquiring two or more material images under one or more optical parameters, each material image corresponding to one parameter interval of the optical parameters, the optical parameters including at least one of polarization direction and spectrum; and performing image fusion on the acquired material images to obtain a target image for feature recognition.
  • In one embodiment, acquiring the two or more material images under one or more optical parameters includes acquiring them through a hyperspectral image acquisition element and/or a polarized-light image acquisition element, the optical parameter corresponding to the hyperspectral image acquisition element being the spectrum and the optical parameter corresponding to the polarized-light image acquisition element being the polarization direction.
  • In one embodiment, performing image fusion on the acquired material images to obtain the target image for feature recognition includes: performing wavelet decomposition on each material image to obtain two or more sub-band images corresponding to sub-band spectra; fusing the sub-band images belonging to the same sub-band spectrum into a sub-band target image according to a preset fusion strategy; and applying the inverse wavelet transform to the sub-band target images of all sub-band spectra to obtain the target image for feature recognition.
  • In one embodiment, fusing the sub-band images belonging to the same sub-band spectrum into a sub-band target image according to a preset fusion strategy includes: dividing the sub-band images belonging to the same sub-band spectrum into two or more regions; and fusing, region by region, the region images of those sub-band images according to the preset fusion strategy to obtain the sub-band target image.
  • In one embodiment, fusing the region images of the sub-band images of the same sub-band spectrum according to the preset fusion strategy includes: computing the regional variance of the region image of each sub-band image in each region; computing the similarity of the region images from the regional variances; and fusing the region images by selection or by weighted merging according to the magnitude of the similarity.
  • In one embodiment, after the target image for feature recognition is obtained, the method further includes performing QR code/barcode recognition on the target image.
  • An image acquisition device, including: a material image acquisition module that acquires two or more material images under one or more optical parameters, each material image corresponding to one parameter interval of the optical parameters, the optical parameters including at least one of polarization direction and spectrum; and an image fusion module that performs image fusion on the acquired material images to obtain a target image for feature recognition.
  • In one embodiment, the material image acquisition module is configured to acquire two or more material images through a hyperspectral image acquisition element and/or a polarized-light image acquisition element, the optical parameter corresponding to the hyperspectral image acquisition element being the spectrum and the optical parameter corresponding to the polarized-light image acquisition element being the polarization direction.
  • In one embodiment, the image fusion module is further configured to perform wavelet decomposition on each material image to obtain two or more sub-band images corresponding to sub-band spectra, fuse the sub-band images belonging to the same sub-band spectrum into sub-band target images according to a preset fusion strategy, and apply the inverse wavelet transform to the sub-band target images of all sub-band spectra to obtain the target image for feature recognition.
  • In one embodiment, the image fusion module is further configured to divide the sub-band images belonging to the same sub-band spectrum into two or more regions, and to fuse, region by region, the region images of those sub-band images according to the preset fusion strategy to obtain the sub-band target images.
  • In one embodiment, the image fusion module is further configured to compute the regional variance of the region image of each sub-band image in each region, compute the similarity of the region images from the regional variances, and fuse the region images by selection or by weighted merging according to the magnitude of the similarity.
  • With the above image acquisition method and device, glare caused by the illumination direction is countered by acquiring material images in different polarization directions and fusing them, so that polarized reflections produced at a particular illumination angle do not interfere with the feature information in the captured image, while feature information masked by glare in one polarization direction is supplemented from material images acquired in other polarization directions. Likewise, for transparent or translucent pictures to be recognized, or pictures whose background has faded or been contaminated and become unclear, material images can be acquired in different spectral bands, and the feature information contained in the material image of each band can be extracted and added to the final captured image through image fusion. The captured image to be recognized therefore contains the feature information of the original picture as fully as possible, which further improves the accuracy of subsequent image recognition.
  • FIG. 1 is an architecture diagram of an image acquisition system in one embodiment;
  • FIG. 2 is a flowchart of an image acquisition method in another embodiment;
  • FIG. 3 is a schematic diagram of fusing material images of multiple polarization directions in one embodiment;
  • FIG. 4 is a schematic diagram of fusing material images of multiple spectral bands in one embodiment;
  • FIG. 5 is a flowchart of a wavelet-transform-based image fusion process in one embodiment;
  • FIG. 6 is a schematic diagram of a region-wise image fusion process in one embodiment;
  • FIG. 7 is a schematic diagram of an image acquisition device in one embodiment;
  • FIG. 8 is a schematic diagram of the composition of a computer system that runs the foregoing image acquisition method in one embodiment.
  • To solve the technical problem that, in prior-art recognition of images, in particular of barcodes and two-dimensional codes such as QR (Quick Response) codes, uneven illumination or the transparency of the code picture itself yields low-quality captured images with unclear features and hence low recognition accuracy, the present invention proposes an image acquisition method, an image acquisition device, and an image acquisition system for implementing the image acquisition method.
  • In one embodiment, the image acquisition method proposed by the present invention is implemented on the image acquisition system shown in FIG. 1. Compared with a conventional barcode or QR code reader, this system differs in the choice of photosensitive element. In the traditional technology, a camera or another photoelectric sensor such as a CMOS or CCD sensor captures a single picture of the QR code or barcode to complete the acquisition of the image to be recognized. In the image acquisition system of this embodiment, a hyperspectral image acquisition element and/or a polarized-light image acquisition element serves as the image-capturing sensor: the hyperspectral element can acquire multispectral images over multiple spectral bands and extract the image corresponding to a specific band, while the polarized-light element can acquire multiple images in multiple polarization directions, that is, one image per polarization direction.
  • In this embodiment, the image acquisition system may further include an image processing chip, which can fuse the material images of multiple spectral intervals collected by the hyperspectral image acquisition element into the image to be recognized, fuse the material images of multiple polarization directions collected by the polarized-light image acquisition element into the image to be recognized, or fuse the material images collected by both elements into the image to be recognized.
  • In other embodiments, the material images collected by the hyperspectral image acquisition element and/or the polarized-light image acquisition element may instead be sent directly to an external computer device, which processes and fuses them to obtain the image to be recognized.
  • Specifically, as shown in FIG. 2, the image acquisition method based on this image acquisition system includes:
  • Step S102: Acquire two or more material images under one or more optical parameters, each material image corresponding to one parameter interval of the optical parameters, the optical parameters including at least one of polarization direction and spectrum.
  • In one embodiment, the image acquisition system running this method includes only a polarized-light image acquisition element. The optical parameter on which image acquisition depends is then the single parameter of polarization direction, with each range of polarization angles constituting one parameter interval; the images captured by the polarized-light element at the different polarization angles are the material images used for later image fusion. For example, in the coordinate system of the polarized-light element itself, taking a 45-degree angular range as one parameter interval, eight pictures can be acquired as material images for subsequent fusion, as sketched below.
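As an illustration only (not the publication's implementation), the following Python sketch organizes one material image per 45-degree polarization interval; `capture_frame` is a hypothetical driver callback standing in for the polarized-light image acquisition element:

```python
import numpy as np

def capture_polarization_stack(capture_frame, interval_deg=45):
    """Collect one material image per polarization-angle parameter interval.

    `capture_frame(angle_deg)` is a placeholder for whatever call returns a
    frame with the analyzer set to `angle_deg`; eight 45-degree intervals
    give the eight material images mentioned above.
    """
    angles = range(0, 360, interval_deg)
    return {angle: np.asarray(capture_frame(angle), dtype=np.float32)
            for angle in angles}
```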
  • The point of collecting material pictures in multiple polarization directions with the polarized-light image acquisition element is that when the image to be recognized lies on a smooth surface, at a certain illumination angle (the Brewster angle, which depends on the refractive index of the material) the glare formed by reflection is polarized light, and the strong bright/dark contrast caused by glare is the main interference in acquiring the image to be recognized. In this case, collecting material pictures in multiple polarization directions avoids the polarized light responsible for the glare, and fusing the material images from the other polarization directions yields an image to be recognized free of glare interference.
  • In another embodiment, the image acquisition system running this method includes only a hyperspectral image acquisition element. The optical parameter on which image acquisition depends is then the single parameter of spectrum, with each wavelength interval constituting one parameter interval of the spectrum; the images captured by the hyperspectral element in the different spectral bands (wavelength intervals) are the material images used for later image fusion.
  • For example, a specific wavelength step can be used as the parameter interval and a predetermined number of pictures collected as material images for subsequent fusion, or wavelength intervals of non-uniform width, such as ultraviolet, visible, near-infrared, and other light, can serve as the parameter intervals. The image collected by the hyperspectral image acquisition element is a hyperspectral image, from which the image in each parameter interval can be extracted as a material image, as sketched below.
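A minimal sketch of this band-extraction step, assuming the hyperspectral element delivers a cube with a known wavelength axis (the per-interval averaging is an illustrative choice, not taken from the publication):

```python
import numpy as np

def extract_band_images(hypercube, wavelengths_nm, band_edges_nm):
    """Split a hyperspectral cube of shape (H, W, channels) into material images.

    `band_edges_nm` lists the parameter intervals, e.g.
    [(200, 400), (400, 700), (700, 1000)] for ultraviolet / visible / near-infrared.
    The channels falling in each interval are averaged into one material image.
    """
    wavelengths_nm = np.asarray(wavelengths_nm)
    materials = []
    for lo, hi in band_edges_nm:
        mask = (wavelengths_nm >= lo) & (wavelengths_nm < hi)
        if mask.any():
            materials.append(hypercube[..., mask].mean(axis=-1))
    return materials
```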
  • The point of collecting material pictures in multiple spectral bands with the hyperspectral image acquisition element is that the pattern and background of the image to be recognized may be too similar in color to distinguish: for example, the white background of a QR code/barcode may yellow with long use, or darken with attached grime, so that under visible light the original white background can barely be told apart from the feature lines that carry the code. The hyperspectral element captures how the image to be recognized appears in each wavelength interval, and because the contrast between the background and the feature parts differs across the material images of different wavelength intervals (that is, the images extracted for the corresponding spectral bands), a suitable image fusion method can take the material images with better contrast as the basis and fuse them into a higher-quality image to be recognized.
  • In another embodiment, the image acquisition system running this method includes both a polarized-light image acquisition element and a hyperspectral image acquisition element; that is, when the system acquires multiple material pictures it varies both the polarization direction and the spectral band. For example, material image 1 can be collected in polarization direction 1 and band 1, material image 2 in polarization direction 1 and band 2, and material image 3 in polarization direction 2 and band 1. With m parameter intervals for the polarization direction and n for the wavelength, up to m x n material images can in principle be collected; in practice the collected material images can be screened according to the actual situation, and, within the available computing capacity, a suitable subset of the m x n images is selected for fusion, as sketched below.
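A sketch of how the combined parameter grid might be organized; `capture` is again a hypothetical callback, and truncating the combination list is only a placeholder for the screening step:

```python
import itertools

def collect_material_grid(capture, polarization_angles, spectral_bands, max_images=None):
    """Index material images by (polarization interval, spectral interval).

    Up to m x n images can be collected; `max_images` stands in for a
    screening rule that matches the available computing capacity.
    """
    combos = list(itertools.product(polarization_angles, spectral_bands))
    if max_images is not None:
        combos = combos[:max_images]
    return {(angle, band): capture(angle, band) for angle, band in combos}
```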
  • Further, in this embodiment, the image acquisition system may also integrate a micro-structured array for image acquisition, that is, a chip combined with a deep-learning artificial-intelligence algorithm. The micro-array structure may be a polarization micro-array, a spectral micro-array, or a combination of the two, and it can effectively collect optical information in multiple dimensions, including light intensity, phase, spectrum, angle of incidence, and polarization direction. Such an optical chip is highly integrated and is also very small and light.
  • Step S104: Perform image fusion on the collected material images to obtain a target image for feature recognition.
  • Image fusion is the process of taking image data about the same target collected through multiple source channels and, by means of image processing and computer techniques, extracting as much of the useful information in each channel as possible and synthesizing it into a single high-quality image. In this embodiment, it is the process of extracting edges, contours, textures, and similar information from the multiple material images and combining them into a high-quality image that reflects the features of the QR code/barcode as fully as possible.
  • FIG. 3 shows the process of fusing the material images collected by the polarized-light image acquisition element when the optical parameter is the polarization direction. For a QR code/barcode image attached to a non-flat surface, the illumination angle may produce the Brewster effect on the surface of the code, generating polarized light that causes glare; as a result, a large bright spot appears in the picture captured by a conventional acquisition device and obscures the feature parts of the original picture. By collecting material pictures in multiple polarization directions with the polarized-light element, however, partially clear pictures are obtained in some polarization directions while the remaining parts appear clearly in other polarization directions, and stitching these partially clear material pictures together through image fusion yields an image to be recognized with good illumination.
  • The polarized-light image acquisition element therefore collects material images at multiple polarization angles and fuses them, which prevents illumination-induced glare from interfering with the captured image to be recognized and makes the fused image clearer.
  • FIG. 4 shows the process of fusing the material images collected by the hyperspectral image acquisition element when the optical parameter is the spectrum. When the background or feature parts of the picture are blurred by fading or contamination, or the QR code/barcode is transparent or translucent, a traditional acquisition device, which captures only a single fixed band such as the visible spectrum or the infrared spectrum, collects an image to be recognized that is as blurred as it appears to the naked eye, which makes subsequent recognition difficult. The hyperspectral element instead collects material images across multiple spectral bands and fuses them, avoiding the blur caused by transparency, translucency, fading, or contamination and making the fused image to be recognized clearer.
  • In this embodiment, the image fusion of the material images is based mainly on the wavelet transform. The inherent properties of the wavelet transform give it the following advantages in image processing: perfect reconstruction guarantees that the signal suffers no information loss and gains no redundant information during decomposition; the image is decomposed into a combination of an approximation image and detail images that represent different structures of the image, which makes it easy to extract the structural information and detail information of the original image; and wavelet analysis provides directional selectivity that matches the orientation sensitivity of the human visual system.
  • Frequency-domain analysis of an image by the wavelet transform decomposes a material image into multiple frequency-domain images, each corresponding to a sub-band spectrum; the decomposed image corresponding to a sub-band spectrum is called a sub-band image.
  • Specifically, as shown in FIG. 5, the image fusion method based on wavelet decomposition includes the following steps.
  • Step S202: Perform wavelet decomposition on each material image to obtain two or more sub-band images corresponding to sub-band spectra.
  • Wavelet decomposition of a material image means selecting an appropriate wavelet basis and decomposing the material image into multiple sub-images in the frequency domain; the frequency band of each wavelet basis is the corresponding sub-band spectrum, and each sub-image is the sub-band image of that sub-band spectrum. The choice of wavelet basis is not limited to a specific one, and the scale and decomposition coefficients of the wavelet decomposition are likewise not limited; they can be chosen according to the actual situation.
  • In general, the low-frequency sub-band image corresponds to the background and overall tone of the image, while the high-frequency sub-band images correspond to its edges and texture features. For example, if the material image is decomposed with two orthogonal wavelet bases at each step, the first decomposition yields the two sub-band images L1 and H1; decomposing H1 further yields L2 and H2, and decomposing H2 again yields L3 and H3. The sub-band images finally obtained for the material image are then L1, L2, L3, and H3.
  • Step S204: Fuse the sub-band images belonging to the same sub-band spectrum into a sub-band target image according to a preset fusion strategy.
  • Step S206: Apply the inverse wavelet transform to the sub-band target images of all sub-band spectra to obtain the target image for feature recognition.
  • For example, if the sub-band images of material image A after decomposition are LA1, LA2, LA3, and HA3, and those of material image B are LB1, LB2, LB3, and HB3, then sub-band fusion merges LA1 and LB1 into LF1, LA2 and LB2 into LF2, LA3 and LB3 into LF3, and HA3 and HB3 into HF3. The fused sub-band images are then transformed from the frequency domain back into the time domain by the inverse wavelet transform to give the image F to be recognized, as sketched below.
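A runnable sketch of steps S202 to S206 using the PyWavelets library; the maximum-absolute-value rule for the detail coefficients is only a stand-in for the region-wise strategy described below, and `db2` with three levels is an arbitrary choice:

```python
import numpy as np
import pywt  # PyWavelets

def _max_abs(arrays):
    """Keep, pixel by pixel, the coefficient with the largest magnitude."""
    stack = np.stack(arrays)
    idx = np.abs(stack).argmax(axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]

def wavelet_fuse(materials, wavelet="db2", level=3):
    """Decompose each material image, fuse matching sub-bands, invert (S202-S206)."""
    decomps = [pywt.wavedec2(np.asarray(m, dtype=np.float32), wavelet, level=level)
               for m in materials]
    fused = []
    for subbands in zip(*decomps):            # same sub-band across all images
        if isinstance(subbands[0], tuple):    # detail level: (cH, cV, cD)
            fused.append(tuple(_max_abs(orient) for orient in zip(*subbands)))
        else:                                 # approximation coefficients
            fused.append(np.mean(subbands, axis=0))
    return pywt.waverec2(fused, wavelet)
```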
  • In one embodiment, fusing the sub-band images belonging to the same sub-band spectrum into a sub-band target image according to the preset fusion strategy proceeds region by region. For example, LA1 and LB1 can each be divided into N x N regions; as shown in FIG. 6, they can be divided into 3 x 3 regions with position numbers 1 to 9, so that sub-band image LA1 contains the nine sub-regions A1 to A9 and LB1 contains the nine corresponding sub-regions B1 to B9.
  • The fusion strategy may include two modes, selection and weighted merging. For example, consider the regions at position numbers 1 and 2 (the corresponding region images are A1 and A2 in sub-band image LA1, and B1 and B2 in sub-band image LB1). If the features of LA1 at these two positions are weak while the features of LB1 there are clear, then for positions 1 and 2 the region images B1 and B2 of LB1 can simply be selected as the region images F1 and F2 of the fused sub-band image LF1 at those positions.
  • If at positions 3, 4, 5, and 8 both LA1 and LB1 contain features that are not prominent but still differ, then A3, A4, A5, and A8 can be weighted and merged position by position with B3, B4, B5, and B8 to obtain the region images F3, F4, F5, and F8 of sub-band image LF1 in those four regions. In this way, the region images F1 to F9 at positions 1 to 9 of the sub-band image LF1 of the final image to be recognized all contain the better image features. A sketch of the region division follows.
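A minimal sketch of the N x N region division and reassembly illustrated in FIG. 6 (3 x 3 by default); the fusion of each region pair is left to the rule described next:

```python
import numpy as np

def split_regions(subband, n=3):
    """Divide a sub-band image into an n x n grid of region images (positions 1..n*n)."""
    rows = np.array_split(np.arange(subband.shape[0]), n)
    cols = np.array_split(np.arange(subband.shape[1]), n)
    return [subband[np.ix_(r, c)] for r in rows for c in cols]

def assemble_regions(regions, n=3):
    """Stitch an n x n list of fused region images back into one sub-band image."""
    return np.vstack([np.hstack(regions[i * n:(i + 1) * n]) for i in range(n)])
```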
  • Specifically, the region-wise fusion of the sub-band images of the same sub-band spectrum according to the preset fusion strategy proceeds as follows. Let R(x) be the sub-band coefficient matrix of a sub-band image obtained by wavelet decomposition of material image x, let l be the position number of a region, and let R(x, p) denote the decomposition coefficient at pixel p. Let sigma^2(x, l) and u(x, l) denote, respectively, the variance and the mean of the coefficients over the region Q (of size |Q|) at position l of the sub-band matrix of image x. The similarity of material images A and B over the region at position l is then computed from these regional statistics, as sketched below.
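The explicit formulas are given as equations in the original publication and are not reproduced in this text; a plausible reconstruction, assuming the usual regional statistics and a variance-based match measure common in wavelet-domain fusion, is:

```latex
u(x,l) = \frac{1}{|Q|}\sum_{p \in Q} R(x,p), \qquad
\sigma^{2}(x,l) = \frac{1}{|Q|}\sum_{p \in Q}\bigl(R(x,p) - u(x,l)\bigr)^{2}

M_{A,B}(l) = \frac{2\,\sigma(A,l)\,\sigma(B,l)}{\sigma^{2}(A,l) + \sigma^{2}(B,l)}
```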
  • The similarity M_{A,B}(l) reflects how close the variances of the two region images are: the more alike the region images of the sub-band images of material images A and B are at position l, the closer M_{A,B}(l) is to 1, and the more they differ, the closer M_{A,B}(l) is to 0.
  • In this embodiment, whether the selection strategy or the weighted-merging strategy is used depends on the magnitude of the similarity M_{A,B}(l). A similarity threshold T can be introduced, and the strategy is chosen by comparing M_{A,B}(l) with T: when M_{A,B}(l) is large, the weighted-merging strategy is used, and when it is small, the selection strategy is used, namely:
  • Under the selection strategy, the region image of the sub-band image of the material image with the larger regional variance is selected for that region. A region with larger variance is usually one with more pronounced features, containing more differences in edges, texture, and information, whereas a region with small variance is usually background or a solid color. Selecting the region image of the material image with the larger regional variance as the region image of the fused sub-band image in that region therefore keeps more feature information such as edges and texture, so the fused image contains more feature information and the recognition accuracy is higher.
  • Under the weighted-merging strategy, the region image of the sub-band image of the material image with the larger regional variance is weighted more heavily, and that of the material image with the smaller regional variance is weighted less. Let W_max be the weighting coefficient of the region image of the sub-band image with the larger regional variance in that region, and W_min the weighting coefficient of the region image of the sub-band image with the smaller regional variance. W_max and W_min can be set from the similarity and the threshold, and they may also be set according to the actual scenario; choosing appropriate W_max and W_min lets the fused image retain more feature information. A common setting from the fusion literature is sketched below.
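The weight formulas likewise appear as equations in the original and are not reproduced here; a plausible reconstruction, assuming the Burt-Kolczynski-style weighting that is commonly paired with this match-measure/threshold scheme, is:

```latex
W_{\min} = \frac{1}{2} - \frac{1}{2}\left(\frac{1 - M_{A,B}(l)}{1 - T}\right), \qquad
W_{\max} = 1 - W_{\min}
```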
  • Weighted fusion of the sub-band images of the material images in a given region under this strategy not only retains the feature information contained in the sub-band image with the relatively large regional variance but also takes into account the feature information contained in the sub-band image with the relatively small regional variance, ensuring that no feature information is omitted; the fused sub-band image in that region therefore contains more feature information, and the recognition accuracy is higher. A minimal end-to-end sketch of this region rule follows.
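A minimal end-to-end sketch of the region rule, assuming the threshold value and the reconstructed formulas above (both are assumptions, not taken verbatim from the publication):

```python
import numpy as np

def fuse_region(region_a, region_b, threshold=0.7):
    """Fuse two corresponding region images by selection or weighted merging."""
    var_a, var_b = region_a.var(), region_b.var()
    similarity = 2.0 * np.sqrt(var_a * var_b) / (var_a + var_b + 1e-12)

    if similarity <= threshold:                       # regions differ: select the
        return region_a if var_a >= var_b else region_b  # high-variance, feature-rich one

    w_min = 0.5 - 0.5 * (1.0 - similarity) / (1.0 - threshold)
    w_max = 1.0 - w_min                               # regions agree: weighted merge
    hi_var, lo_var = (region_a, region_b) if var_a >= var_b else (region_b, region_a)
    return w_max * hi_var + w_min * lo_var
```

Applied to each of the positions 1 to 9 and reassembled with a helper such as `assemble_regions` above, this yields the fused sub-band image LF1.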
  • The above image acquisition method is mainly intended for QR code/barcode recognition scenarios: after the target image for feature recognition is obtained, QR code/barcode recognition can be performed on the target image. In other embodiments, the method can also be applied to other image recognition fields, such as face recognition, vehicle detection, security inspection, and other scenarios that require capturing an image and recognizing its features; the image acquisition method of the present invention is not limited to the foregoing QR code/barcode recognition process.
  • In one embodiment, an image acquisition device is provided accordingly. Specifically, as shown in FIG. 7, it includes a material image acquisition module 102 and an image fusion module 104, where:
  • the material image acquisition module 102 acquires two or more material images under one or more optical parameters, each material image corresponding to one parameter interval of the optical parameters, the optical parameters including at least one of polarization direction and spectrum; and
  • the image fusion module 104 performs image fusion on the acquired material images to obtain a target image for feature recognition.
  • In one embodiment, the material image acquisition module 102 is configured to acquire two or more material images through a hyperspectral image acquisition element and/or a polarized-light image acquisition element, the optical parameter corresponding to the hyperspectral image acquisition element being the spectrum and the optical parameter corresponding to the polarized-light image acquisition element being the polarization direction.
  • In one embodiment, the image fusion module 104 is further configured to perform wavelet decomposition on each material image to obtain two or more sub-band images corresponding to sub-band spectra, fuse the sub-band images belonging to the same sub-band spectrum into sub-band target images according to a preset fusion strategy, and apply the inverse wavelet transform to the sub-band target images of all sub-band spectra to obtain the target image for feature recognition.
  • In one embodiment, the image fusion module 104 is further configured to divide the sub-band images belonging to the same sub-band spectrum into two or more regions, and to fuse, region by region, their region images according to the preset fusion strategy to obtain the sub-band target images.
  • In one embodiment, the image fusion module 104 is further configured to compute the regional variance of the region image of each sub-band image in each region, compute the similarity of the region images from the regional variances, and fuse the region images by selection or by weighted merging according to the magnitude of the similarity.
  • FIG. 8 shows a computer system based on the von Neumann architecture that runs the above image acquisition method. Specifically, it may include an external input interface 1001, a processor 1002, a memory 1003, and an output interface 1004 connected to the system bus.
  • The external input interface 1001 may include at least a network interface 10012 and a USB interface 10014; the memory 1003 may include an external memory 10032 (for example, a hard disk, an optical disk, or a floppy disk) and an internal memory 10034; and the output interface 1004 may include at least a display screen 10042 and similar devices.
  • In this embodiment, the method runs as a computer program whose program files are stored in the external memory 10032 of the above von Neumann-based computer system 10 and are loaded into the internal memory 10034 at run time, then compiled into machine code and passed to the processor 1002 for execution, so that a logical material image acquisition module 102 and a logical image fusion module 104 are formed in the von Neumann-based computer system 10. During execution of the image acquisition method, the input parameters are received through the external input interface 1001, transferred to the memory 1003 for buffering, and then fed to the processor 1002 for processing; the resulting data may be cached in the memory 1003 for subsequent processing or passed to the output interface 1004 for output.

Abstract

An image collection method comprising: collecting two or more material images under one or more optical parameters, wherein each material image corresponds to a parameter interval of the optical parameters, and the optical parameters comprise at least one of a polarization direction and a spectrum (S102); and carrying out image fusion on the collected material images to obtain a target image for feature recognition (S104). Further disclosed is an image collection device. With this method, the amount of feature information in the collected image can be maximized, thereby improving the accuracy of image recognition.

Description

图像采集方法及装置Image acquisition method and device 技术领域Technical field
本发明涉及图像处理领域,特别涉及一种图像采集方法及装置。The invention relates to the field of image processing, in particular to an image acquisition method and device.
背景技术Background technique
在传统技术中,条码和二维码识别技术具有数据获取、自动识别、快速响应等功能,具有制作便捷,成本低廉,使用方便,技术完善等特点,在日常生活中的身份验证,在线支付,信息标识等场景中为使用者提供了十分便利的人机交互体验。In the traditional technology, bar code and two-dimensional code recognition technology has the functions of data acquisition, automatic recognition, fast response, etc. It has the characteristics of convenient production, low cost, convenient use, perfect technology, etc., identity verification in daily life, online payment, Information identification and other scenarios provide users with a very convenient human-computer interaction experience.
传统技术中的条码和二维码识别技术包含两个环节,其一为条码和二维码图像信息的采集和获取,其二为针对条码和二维码图像信息的识别。然而,若条码和二维码图像信息的采集和获取环节出现条码或二维码图像不清晰的情况,则会大大影响条码和二维码图像识别的准确率。The barcode and two-dimensional code recognition technology in the traditional technology includes two links, one is the collection and acquisition of the image information of the barcode and the two-dimensional code, and the second is the recognition of the image information of the barcode and the two-dimensional code. However, if the barcode or QR code image is unclear during the collection and acquisition of barcode and QR code image information, it will greatly affect the accuracy of barcode and QR code image recognition.
例如,印刻在高反光弧面或透明/半透明材料上面的条码或者二维码图案,因光源角度的问题造成了高反光弧面上的条码或二维码图像亮度非常不均匀,甚至会出现一半亮一半暗的阴阳码,这种情况下,普通读码器采集的条码或者二维码图象即十分不清楚,信息量遗失严重,使得后续的识别过程要么无法识别出特定信息,要么会识别出错误的信息。For example, the bar code or two-dimensional code pattern printed on the high reflective arc surface or transparent/translucent material, the brightness of the bar code or two-dimensional code image on the high reflective arc surface is very uneven due to the problem of the light source angle, and may even appear Half bright and half dark yin and yang codes. In this case, the bar code or two-dimensional code image collected by the ordinary barcode reader is very unclear, and the amount of information is seriously lost, so that the subsequent recognition process will either fail to recognize specific information, or will Identify the wrong information.
目前针对条码或二维码图像因拍摄时光照不均等原因而产生噪音这一问题,传统技术中虽然采用了中值滤波原理,对条码和二维码图像进行了降噪处理,结合了自适应亮度的二值化处理方法,解决了条码和二维码图像光照不均等问题,但是对于高反光弧面或透明/半透明材料二维码的检测,二维码因穿孔、污损等引起局部损坏情况下识别仍然存在准确度较低或无法识别的问题。At present, in view of the problem of noise caused by uneven illumination of the bar code or two-dimensional code image during shooting, although the median filter principle is used in the traditional technology, the bar code and the two-dimensional code image are subjected to noise reduction processing, combined with adaptive The brightness binary processing method solves the problem of uneven illumination of bar code and two-dimensional code images, but for the detection of two-dimensional codes of highly reflective arc surfaces or transparent/translucent materials, the two-dimensional codes are partially caused by perforation and staining. In the case of damage, the identification still has the problem of low accuracy or unrecognizable.
发明内容Summary of the invention
基于此,为提高图像采集得到的待识别图像包含的特征信息的信息量,从而解决现有技术中的图像识别由于图像采集过程由于光照角度和待识别图像本身较模糊而引起的识别准确率较低的技术问题,特提出了一种图像采集方法。Based on this, in order to increase the information amount of the feature information contained in the image to be recognized obtained by image collection, and to solve the problem of image recognition in the prior art due to the blurring of the illumination angle and the image to be recognized in the image collection process, the recognition accuracy rate is relatively high Low technical problems, especially proposed an image acquisition method.
一种图像采集方法,包括:An image acquisition method, including:
采集一个或一个以上光学参数下的两个或两个以上的素材图像,一所述素材图像对应所述光学参数的一参数区间,所述光学参数至少包括偏振方向和光谱中的至少一种;Acquiring two or more material images under one or more optical parameters, one material image corresponding to a parameter interval of the optical parameters, the optical parameters including at least one of polarization direction and spectrum;
对所述采集的素材图像进行图像融合,得到用于特征识别的目标图像。Perform image fusion on the collected material images to obtain a target image for feature recognition.
在其中一个实施例中,所述采集一个或一个以上光学参数下的两个或两个以上的素材图像包括:In one of the embodiments, the acquisition of two or more material images under one or more optical parameters includes:
通过高光谱图像采集元件和/或偏振光图像采集元件采集两个或两个以上的素材图像,所述高光谱图像采集元件对应的光学参数为光谱,所述偏振光图像采集元件对应的光学参数为偏振方向。Acquiring two or more material images through a hyperspectral image acquisition element and/or a polarized light image acquisition element, the optical parameter corresponding to the hyperspectral image acquisition element is a spectrum, and the optical parameter corresponding to the polarized light image acquisition element Is the polarization direction.
在其中一个实施例中,所述对所述采集的素材图像进行图像融合,得到用于特征识别的目标图像包括:In one of the embodiments, the image fusion of the collected material images to obtain a target image for feature recognition includes:
对所述素材图像进行小波分解,得到两个或两个以上的与子带频谱对应的子带图像;Performing wavelet decomposition on the material image to obtain two or more subband images corresponding to the subband spectrum;
将属于同一子带频谱的子带图像按照预设的融合策略融合为子带目标图像;Merge sub-band images belonging to the same sub-band spectrum into sub-band target images according to a preset fusion strategy;
对各子带频谱对应的子带目标图像进行小波逆变换,得到用于特征识别的目标图像。The sub-band target image corresponding to each sub-band spectrum is subjected to inverse wavelet transform to obtain a target image for feature recognition.
在其中一个实施例中,所述将属于同一子带频谱的子带图像按照预设的融合策略融合为子带目标图像包括:In one of the embodiments, the fusion of sub-band images belonging to the same sub-band spectrum into a sub-band target image according to a preset fusion strategy includes:
将属于同一子带频谱的子带图像划分为两个或两个以上的区域;Divide subband images belonging to the same subband spectrum into two or more regions;
将同一子带频谱的子带图像在各所述区域的区域图像按照预设的融合策略融合,得到子带目标图像。The area images of the subband images of the same subband spectrum in each of the areas are fused according to a preset fusion strategy to obtain a subband target image.
在其中一个实施例中,所述将同一子带频谱的子带图像在各所述区域的区域图像按照预设的融合策略融合包括:In one of the embodiments, the fusion of the sub-band images of the same sub-band spectrum in each of the regions according to a preset fusion strategy includes:
计算所述子带图像在各所述区域各自对应的区域图像的区域方差;Calculating the area variance of the area images corresponding to the sub-band images in each of the areas;
根据所述区域方差计算各区域图像的相似度;Calculate the similarity of images in each area according to the regional variance;
根据所述相似度的大小采用选择或加权合并的方式将所述区域图像融合。According to the size of the similarity, the region images are fused by selection or weighted merging.
在其中一个实施例中,所述得到用于特征识别的目标图像之后还包括:In one of the embodiments, after obtaining the target image for feature recognition, the method further includes:
对所述目标图像进行二维码/条形码识别。Perform two-dimensional code/barcode identification on the target image.
此外,为提高图像采集得到的待识别图像包含的特征信息的信息量,从而解决现有技术中的图像识别由于图像采集过程由于光照角度和待识别图像本身较模糊而引起的识别准确率较低的技术问题,特提出了一种图像采集装置。In addition, in order to improve the information amount of the feature information contained in the image to be recognized obtained by image collection, and to solve the problem of image recognition in the prior art due to the blurring of the illumination angle and the image to be recognized in the image collection process, the recognition accuracy is low The technical problem, especially proposed an image acquisition device.
一种图像采集装置,包括:An image acquisition device, including:
素材图像获取模块,采集一个或一个以上光学参数下的两个或两个以上的素材图像,一所述素材图像对应所述光学参数的一参数区间,所述光学参数至少包括偏振方向和光谱中的至少一种;The material image acquisition module collects two or more material images under one or more optical parameters, one material image corresponds to a parameter interval of the optical parameters, and the optical parameters include at least the polarization direction and the spectrum At least one of
图像融合模块,对所述采集的素材图像进行图像融合,得到用于特征识别的目标图像。The image fusion module performs image fusion on the collected material images to obtain a target image for feature recognition.
在其中一个实施例中,所述素材图像获取模块用于通过高光谱图像采集元件和/或偏振光图像采集元件采集两个或两个以上的素材图像,所述高光谱图像采集元件对应的光学参数为光谱,所述偏振光图像采集元件对应的光学参数为偏振方向。In one of the embodiments, the material image acquisition module is used to acquire two or more material images through a hyperspectral image acquisition element and/or a polarized light image acquisition element, the optical corresponding to the hyperspectral image acquisition element The parameter is the spectrum, and the optical parameter corresponding to the polarized light image acquisition element is the polarization direction.
在其中一个实施例中,所述图像融合模块还用于对所述素材图像进行小波分解,得到两个或两个以上的与子带频谱对应的子带图像;将属于同一子带频谱的子带图像按照预设的融合策略融合为子带目标图像;对各子带频谱对应的子带目标图像进行小波逆变换,得到用于特征识别的目标图像。In one of the embodiments, the image fusion module is further used to perform wavelet decomposition on the material image to obtain two or more subband images corresponding to the subband spectrum; sub-bands belonging to the same subband spectrum The band images are fused into sub-band target images according to a preset fusion strategy; the sub-band target images corresponding to each sub-band spectrum are subjected to inverse wavelet transform to obtain target images for feature recognition.
在其中一个实施例中,所述图像融合模块还用于将属于同一子带频谱的子带图像划分为两个或两个以上的区域;将同一子带频谱的子带图像在各所述区域的区域图像按照预设的融合策略融合,得到子带目标图像。In one of the embodiments, the image fusion module is further used to divide the sub-band images belonging to the same sub-band spectrum into two or more regions; the sub-band images of the same sub-band spectrum are in each of the regions The images of the region are merged according to the preset fusion strategy to obtain the sub-band target image.
在其中一个实施例中,所述图像融合模块还用于计算所述子带图像在各所述区域各自对应的区域图像的区域方差;根据所述区域方差计算各区域图像的相似度;根据所述相似度的大小采用选择或加权合并的方式将所述区域图像融合。实施本发明实施例,将具有如下有益效果:In one of the embodiments, the image fusion module is further used to calculate the regional variance of the regional images corresponding to the sub-band images in each of the regions; calculate the similarity of each regional image according to the regional variance; The size of the similarity is fused with the area images by means of selection or weighted merging. The implementation of the embodiments of the present invention will have the following beneficial effects:
采用了上述图像采集方法及装置之后,对于光照方向引起的眩光的干扰,可通过采集不同的偏振方向下的素材图像,然后将其融合来避免在特定光照方向下反射产生偏振光对采集图像中特征信息的干扰,而将在其他偏振方向下采 集到的被眩光掩盖的特征信息作为补充融合到最终采集的待识别图像中;同时,对于透明或半透明的待识别图片,或者底色褪色或受污染发生模糊不清的待识别图片,可通过采集不同光谱谱段下的素材图像,并提取各谱段下的素材图像各自包含的特征信息,通过图像融合添加到最终采集的待识别图像中。这就使得采集的待识别图像最大限度地包含了待识别图片中的特征信息,进一步地提高了后续图像识别的准确率。After using the above image acquisition method and device, for the glare interference caused by the illumination direction, the material images in different polarization directions can be collected and then fused to avoid the reflection of polarized light generated in the specific illumination direction to the captured image. Interference of feature information, and the glare covered feature information collected in other polarization directions is used as a supplement to the final collected image to be recognized; at the same time, for the transparent or semi-transparent image to be recognized, or the background color fades or Contaminated and unclear pictures to be recognized can be obtained by collecting material images under different spectral bands, and extracting the feature information contained in the material images under each spectrum band, and adding to the final collected image to be recognized through image fusion . This makes the collected image to be recognized contain the feature information in the picture to be recognized to the greatest extent, which further improves the accuracy of subsequent image recognition.
附图说明BRIEF DESCRIPTION
下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍。The drawings required in the embodiments or the description of the prior art will be briefly introduced below.
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to more clearly explain the embodiments of the present application or the technical solutions in the prior art, the following will briefly introduce the drawings required in the embodiments or the description of the prior art. Obviously, the drawings in the following description are only These are some embodiments of the present application. For those of ordinary skill in the art, without paying any creative work, other drawings can be obtained based on these drawings.
图1为一个实施例中一种图像采集系统的架构图;FIG. 1 is an architecture diagram of an image acquisition system in an embodiment;
图2为另一个实施例中一种图像采集方法的流程图;2 is a flowchart of an image acquisition method in another embodiment;
图3为一个实施例中将多个偏振方向的素材图像融合的示意图;3 is a schematic diagram of fusing material images with multiple polarization directions in an embodiment;
图4为一个实施例中将多个光谱谱段的素材图像融合的示意图;FIG. 4 is a schematic diagram of fusing material images of multiple spectral bands in an embodiment;
图5为一个实施例中基于小波变换的图像融合过程的流程图;5 is a flowchart of an image fusion process based on wavelet transform in an embodiment;
图6为一个实施例中区域图像融合过程的示意图;6 is a schematic diagram of the process of regional image fusion in an embodiment;
图7为一个实施例一种图像采集装置的示意图;7 is a schematic diagram of an image acquisition device according to an embodiment;
图8为一个实施例中运行前述图像采集方法的计算机系统的组成示意图。FIG. 8 is a schematic diagram of the composition of a computer system running the foregoing image acquisition method in one embodiment.
具体实施方式detailed description
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solutions in the embodiments of the present invention will be described clearly and completely in conjunction with the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, but not all the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without making creative efforts fall within the protection scope of the present invention.
为解决现有技术中的图像识别技术中,特别是条形码、二维码,例如QR 码(英文:Quick Response Code,中文:快速响应码)的识别过程中,由于容易受到光照方向不均匀、或条形码/二维码图片本身透明度的影响,从而导致的待识别图像质量不高、特征不明显,进而导致的图像识别准确度较低的技术问题,本发明特提出了一种图像采集方法、装置以及用于实施所述图像采集方法的一种图像采集系统。In order to solve the image recognition technology in the prior art, especially barcodes and two-dimensional codes, such as QR codes (English: Quick Response Code, Chinese: quick response code) in the recognition process, due to the uneven illumination direction, or The effect of the transparency of the bar code/two-dimensional code picture itself, which leads to the low quality and unobvious characteristics of the image to be recognized, which in turn leads to the technical problem of low image recognition accuracy. The present invention specifically proposes an image acquisition method and device And an image acquisition system for implementing the image acquisition method.
在一个实施例中,本发明提出的图像采集方法的实现基于如图1所示的图像采集系统,该系统与传统的二维码或条形码的读码器相比,感光元件的选取不同,传统技术中采用相机或其他CMOS、CCD等光电传感器拍摄一张二维码或条形码的图片即可完成待识别图像的采集,而在本实施例的图像采集系统中,使用了高光谱图像采集元件和/或偏振光图像采集元件作为图像采集的传感器设备,其中高光谱图像采集元件可采集多个光谱谱段下的多光谱图像,并可提取出特定谱段下对应的谱段图像;而偏振光图像采集元件可采集多个偏振方向下的多个图像,或者说每个特定的偏振方向采集一个图像。In one embodiment, the implementation of the image acquisition method proposed by the present invention is based on the image acquisition system shown in FIG. 1. Compared with the conventional two-dimensional code or barcode reader, the selection of the photosensitive element is different. In the technology, a camera or other photoelectric sensors such as CMOS and CCD are used to capture a two-dimensional code or barcode image to complete the collection of the image to be recognized. In the image acquisition system of this embodiment, a hyperspectral image acquisition element and/or The polarized light image acquisition element is used as a sensor device for image acquisition, in which the hyperspectral image acquisition element can acquire multi-spectral images under multiple spectral bands, and can extract corresponding spectral band images under a specific spectral band; while polarized light image acquisition The element can acquire multiple images in multiple polarization directions, or one image per specific polarization direction.
在本实施例中,该图像采集系统还可包括图像处理芯片,可将高光谱图像采集元件采集的多个光谱区间的素材图像融合为待识别图像,或者可将偏振光图像采集元件采集的多个照射方向下的素材图像融合为待识别图像,或者可将高光谱图像采集元件和偏振光图像采集元件采集的素材图像融合为待识别图像。In this embodiment, the image acquisition system may further include an image processing chip, which may merge material images of multiple spectral intervals collected by the hyperspectral image acquisition element into images to be recognized, or may integrate multiple polarized light image acquisition elements. The material images in each irradiation direction are fused into the image to be recognized, or the material images collected by the hyperspectral image collection element and the polarized light image collection element can be fused into the image to be recognized.
在其他实施例中,也可直接将高光谱图像采集元件采集的素材图像和/或偏振光图像采集元件采集的素材图像发送给外接的计算机设备,由计算机设备对这些素材图像进行处理和图像融合,得到待识别图像。In other embodiments, the material images collected by the hyperspectral image collection element and/or the material images collected by the polarized light image collection element can also be directly sent to an external computer device, and the computer device can process and fuse these material images To get the image to be recognized.
具体的,如图2所示,基于该图像采集系统的图像采集方法包括:Specifically, as shown in FIG. 2, the image acquisition method based on the image acquisition system includes:
步骤S102:采集一个或一个以上光学参数下的两个或两个以上的素材图像,一所述素材图像对应所述光学参数的一参数区间,所述光学参数至少包括偏振方向和光谱中的至少一种。Step S102: Acquire two or more material images under one or more optical parameters, one material image corresponding to a parameter interval of the optical parameters, the optical parameters at least including at least one of the polarization direction and the spectrum One kind.
在一实施例中,运行此方法的图像采集系统仅包含偏振光图像采集元件,图像采集所依赖的光学参数即为偏振方向这一单一光学参数,不同的偏振方向的角度即对应着偏振方向的参数区间,偏振光图像采集元件采集的多个偏振的角度下的图像,即为用于后期图像融合的素材图像。例如,在偏振光图像采集 元件自身的坐标系中,以45度的角度范围作为参数区间,可采集8张图片作为后续图像融合的素材图像。In one embodiment, the image acquisition system running this method only includes a polarized light image acquisition element. The optical parameter on which image acquisition depends is a single optical parameter of the polarization direction, and the angle of different polarization directions corresponds to the polarization direction. The parameter interval, the images at multiple polarization angles collected by the polarized light image collection element are the material images used for later image fusion. For example, in the coordinate system of the polarized image acquisition element itself, with an angle range of 45 degrees as a parameter interval, 8 pictures can be acquired as material images for subsequent image fusion.
通过偏振光图像采集元件采集多个偏振方向下的素材图片的意义在于,当待识别图像附着在光滑的介质表面时,在一定光照角度下(称为布儒斯特角,与物质的折射率有关),反射形成的眩光是偏振光,而眩光产生的光暗的强烈区分是对待识别图像的采集过程的最大干扰。在这种情况下,通过采集多个偏振方向下的素材图片即可避开引起眩光的偏振光,而通过其他偏振方向的素材图像融合得到不受眩光干扰的待识别图像。The significance of collecting material pictures in multiple polarization directions through the polarized light image acquisition element is that when the image to be recognized is attached to a smooth medium surface, under a certain illumination angle (called Brewster angle, which is related to the refractive index of the substance) Relevant), the glare formed by reflection is polarized light, and the strong distinction between light and dark caused by glare is the biggest interference in the acquisition process of the identification image. In this case, by collecting material pictures in multiple polarization directions, the polarized light causing glare can be avoided, and material images in other polarization directions are fused to obtain an image to be recognized without glare interference.
在另一个实施例中,运行此方法的图像采集系统仅包含高光谱图像采集元件,图像采集所依赖的光学参数即为频谱这一单一光学参数,不同的波长长度的区间即对应着频谱的参数区间,高光谱图像采集元件采集的多个谱段(波长区间)下的图像,即为用于后期图像融合的素材图像。例如,可以特定的波长长度作为参数区间,可采集预定数量的图片作为后续图像融合的素材图像,也可以紫外线、可见光、近红外光和其他光等非特定长度的波长区间作为参数区间采集。高光谱图像采集元件采集的图像为高光谱图像,可分别提取相应参数区间下的图像作为素材图像。In another embodiment, the image acquisition system running this method only includes a hyperspectral image acquisition element. The optical parameter on which image acquisition depends is a single optical parameter of frequency spectrum, and the intervals of different wavelength lengths correspond to the parameters of the frequency spectrum. Intervals, images under multiple spectral bands (wavelength intervals) collected by hyperspectral image acquisition elements are material images used for later image fusion. For example, a specific wavelength length can be used as a parameter interval, a predetermined number of pictures can be collected as a material image for subsequent image fusion, or a non-specific length wavelength interval such as ultraviolet, visible, near-infrared, and other light can be collected as a parameter interval. The image collected by the hyperspectral image acquisition element is a hyperspectral image, and images under corresponding parameter intervals can be extracted as material images, respectively.
通过高光谱图像采集元件采集多个偏振方向下的素材图片的意义在于,当待识别图像的纹路和底色之间颜色相近不清晰时,例如二维码/条形码的白色背景材质由于长期使用变黄,或由于污浊物附着发黑使得原先的白色背景与代表特征的特征纹路在可见光下难以区分时,通过高光谱图像采集元件可采集到各波长区间下该待识别图像的成像情况,由于待识别图像的底色与特征部分在不同波长区间的素材图像(即相应波长区间的谱段对应采集的图像)的对比度不同,因此使用一定的图像融合方法,可以对比度较优的素材图像作为基础融合得到质量较高的待识别图像。The significance of collecting material pictures in multiple polarization directions through the hyperspectral image acquisition element is that when the color and the background color of the image to be recognized are similar and unclear, for example, the white background material of the QR code/barcode changes due to long-term use. Yellow, or the black background due to the attachment of dirt makes it difficult to distinguish the original white background and the characteristic lines representing the features under visible light, the imaging situation of the image to be recognized at each wavelength interval can be collected by the hyperspectral image acquisition element. The contrast between the background color of the recognition image and the material images of the characteristic part in different wavelength ranges (that is, the images corresponding to the spectral bands of the corresponding wavelength range) are different, so using a certain image fusion method, the material image with better contrast can be used as the basic fusion Obtain a high-quality image to be recognized.
在另一个实施例中,运行此方法的图像采集系统可既包含偏振光图像采集元件,同时也包含高光谱图像采集元件,也就是说图像采集系统采集多张素材图片时,不仅考量不同的偏振方向,同时也考量不同的光谱波段。例如,可在偏振方向1,波段1采集素材图像1,在偏振方向1,波段2采集素材图像2,在偏振方向2,波段1采集素材图像3。当偏振方向的参数区间有m个,波长 区间的参数区间有n个,则理论上可采集m×n个素材图像,但实际应用中,可根据实际情况对其进行筛选,对采集的素材图像进行过滤,在计算能力适配的情况下,在该m×n个素材图像的集合中选择合适的素材图像用于图像融合。In another embodiment, the image acquisition system running this method may include both polarized light image acquisition elements and hyperspectral image acquisition elements, which means that when the image acquisition system acquires multiple material pictures, not only different polarizations are considered The direction also considers different spectral bands. For example, material image 1 can be collected in polarization direction 1 and band 1, material image 2 can be collected in polarization direction 1 and band 2, and material image 3 can be collected in polarization direction 2 and band 1. When there are m parameter sections in the polarization direction and n parameter sections in the wavelength section, theoretically m×n material images can be collected, but in practical applications, they can be screened according to the actual situation and the collected material images Filtering is performed, and when the computing power is adapted, a suitable material image is selected from the set of m×n material images for image fusion.
Further, in this embodiment, the image acquisition system may also integrate a microstructure array for image acquisition, that is, a chip combined with deep-learning artificial-intelligence algorithms. The microarray structure may be a polarization microarray, a spectral microarray, or a combination of the two, and can effectively capture optical information in multiple dimensions, including light intensity, phase, spectrum, incidence and polarization direction. Such an optical chip is highly integrated while remaining very small and light.
Step S104: perform image fusion on the collected material images to obtain a target image for feature recognition.
Image fusion is the process of taking the image data about the same target collected through multiple source channels, extracting as much of the useful information in each channel as possible by means of image processing and computer techniques, and finally synthesizing a high-quality image. In this embodiment, it is the process of extracting information such as edges, contours and textures from the multiple material images and combining them into a high-quality image that reflects the features of the QR code/barcode as fully as possible.
Referring to FIG. 3, FIG. 3 shows the process of fusing the material images collected by the polarized light image acquisition element when the optical parameter is the polarization direction. For a QR code/barcode attached to a non-horizontal surface, the illumination angle may give rise to the Brewster effect on the surface of the QR code/barcode picture, producing polarized light that causes glare. As a result, a large light spot appears in the picture to be recognized that a conventional image acquisition device captures, obscuring the feature part of the original picture. However, by collecting multiple material pictures in multiple polarization directions with a polarized light image acquisition element, pictures that are clear in some parts can be obtained in certain polarization directions, while pictures that are clear in other parts are captured in other polarization directions. Stitching these partially clear material pictures together through image fusion yields an image to be recognized with well-balanced illumination.
Therefore, by collecting material images at multiple polarization angles and then fusing them, the polarized light image acquisition element prevents the glare produced by the illumination angle from interfering with the captured image to be recognized, so that the fused image to be recognized is clearer.
Referring again to FIG. 4, FIG. 4 shows the process of fusing the material images collected by the hyperspectral image acquisition element when the optical parameter is the spectrum. For a QR code/barcode image whose background or feature part has faded or become contaminated, or which is blurred because it is transparent or translucent, a conventional image acquisition device (which captures light in only a single specific band, such as the visible spectrum or the infrared spectrum) will, within the visible range, capture an image to be recognized that is as blurred as it appears to the naked eye, which makes subsequent image recognition difficult.
By collecting multiple material pictures in multiple spectral bands, that is, multiple wavelength intervals, with the hyperspectral image acquisition element, pictures with relatively high contrast, or with high contrast in some regions, can be obtained in certain wavelength intervals, while pictures with relatively high contrast in other regions are obtained in other wavelength intervals. Fusing these higher-contrast material pictures through image fusion yields an image to be recognized with good contrast, which facilitates subsequent image recognition.
Therefore, by collecting material images in multiple spectral bands and then fusing them, the hyperspectral image acquisition element avoids the blur caused by the transparency or translucency of the picture to be recognized, or by its fading or contamination, so that the fused image to be recognized is clearer.
In this embodiment, the image fusion of the material images is mainly based on the wavelet transform. The inherent properties of the wavelet transform give it the following advantages in image processing: perfect reconstruction, which ensures that no information is lost and no redundant information is introduced when the signal is decomposed; decomposition of the image into a combination of an average image and detail images, which represent different structures of the image, so that the structural information and detail information of the original image are easy to extract; and a directional selectivity that matches the human visual system. By analyzing an image in the frequency domain with the wavelet transform, one material image can be decomposed into multiple images in the frequency domain; each of these images corresponds to a subband spectrum, and the decomposed image corresponding to a subband spectrum is a subband image.
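A minimal sketch of such a decomposition, assuming the PyWavelets library and an arbitrarily chosen 'db2' wavelet at two levels (neither of which is prescribed by this embodiment), is given below; it also checks the perfect-reconstruction property mentioned above:

    import numpy as np
    import pywt

    material = np.random.rand(256, 256)        # stand-in for one material image

    # Two-level 2-D wavelet decomposition: one low-frequency approximation subband
    # plus (horizontal, vertical, diagonal) detail subbands per level.
    coeffs = pywt.wavedec2(material, wavelet="db2", level=2)
    approximation = coeffs[0]                              # background / base-colour content
    detail_level2, detail_level1 = coeffs[1], coeffs[2]    # edge and texture content

    # Perfect reconstruction: the inverse transform recovers the original image.
    restored = pywt.waverec2(coeffs, wavelet="db2")
    assert np.allclose(restored[:256, :256], material)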
Specifically, as shown in FIG. 5, the image fusion method based on wavelet decomposition includes the following steps:
Step S202: perform wavelet decomposition on the material images to obtain two or more subband images corresponding to subband spectra.
Wavelet decomposition of a material image means selecting suitable wavelet bases to decompose the material image into multiple sub-images in the frequency domain; the frequency of each wavelet base is the corresponding subband spectrum, and each sub-image is the subband image corresponding to that subband spectrum. In this embodiment, the choice of wavelet basis function is not limited to a particular method, and the scale and coefficients of the wavelet decomposition are likewise not limited; they can be chosen according to the actual situation.
In the wavelet decomposition, the low-frequency subband images correspond to the background and base color of the image, while the high-frequency subband images correspond to the edge and texture features of the image.
For example, if the scale is set to 1 and two orthogonal wavelet bases are used for each decomposition of the material image, the first decomposition yields two subband images L1 and H1; H1 is then decomposed further into two subband images L2 and H2, and H2 is decomposed again into two subband images L3 and H3. The subband images finally obtained for the material image are L1, L2, L3 and H3.
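A one-dimensional sketch of this particular decomposition chain (which, as described above, keeps splitting the high-frequency band) is shown below; the Haar wavelet and the signal length are assumptions for illustration, and a real material image would be processed in two dimensions:

    import numpy as np
    import pywt

    x = np.random.rand(256)          # stand-in for one row of a material image
    L1, H1 = pywt.dwt(x, "haar")     # first split: approximation L1, detail H1
    L2, H2 = pywt.dwt(H1, "haar")    # split the detail band again, as in the example
    L3, H3 = pywt.dwt(H2, "haar")
    subbands = [L1, L2, L3, H3]      # final subband set of the example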
Step S204: fuse the subband images belonging to the same subband spectrum into a subband target image according to a preset fusion strategy.
Step S206: perform an inverse wavelet transform on the subband target images corresponding to the respective subband spectra to obtain the target image for feature recognition.
Continuing the example above, for material images A and B, the subband images obtained by decomposing material image A are LA1, LA2, LA3 and HA3, and the subband images obtained by decomposing material image B are LB1, LB2, LB3 and HB3. Fusing the subband images means fusing LA1 and LB1 into LF1, LA2 and LB2 into LF2, LA3 and LB3 into LF3, and HA3 and HB3 into HF3.
After LF1, LF2, LF3 and HF3 are obtained, the inverse wavelet transform converts LF1, LF2, LF3 and HF3 in the frequency domain back into the image F to be recognized in the spatial domain.
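The overall pipeline of steps S202 to S206 might be sketched as follows; the PyWavelets calls, the 'db2' wavelet, and the simple averaging and maximum-magnitude rules used here are only placeholders for the preset fusion strategy that is refined in the following paragraphs:

    import numpy as np
    import pywt

    def fuse_images(img_a, img_b, wavelet="db2", level=2):
        """Decompose both material images, merge the subbands that share the same
        subband spectrum, then apply the inverse wavelet transform (step S206)."""
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                    # approximation: average the backgrounds
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            # details: keep the coefficient with the larger magnitude at each position
            fused.append(tuple(np.where(np.abs(p) >= np.abs(q), p, q)
                               for p, q in zip((ha, va, da), (hb, vb, db))))
        return pywt.waverec2(fused, wavelet)

    A = np.random.rand(256, 256)                           # material image A
    B = np.random.rand(256, 256)                           # material image B
    F = fuse_images(A, B)                                  # target image for feature recognition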
Further, fusing the subband images belonging to the same subband spectrum into a subband target image according to a preset fusion strategy includes:
dividing the subband images belonging to the same subband spectrum into two or more regions; and fusing the region images of the subband images of the same subband spectrum in each of the regions according to a preset fusion strategy to obtain the subband target image.
Taking the fusion of subband images LA1 and LB1 as an example, LA1 and LB1 can each be divided into N×N regions. As shown in FIG. 6, they may be divided into 3×3 regions with position numbers 1 to 9, so that subband image LA1 contains nine sub-regions A1 to A9 and LB1 contains nine corresponding sub-regions B1 to B9.
The fusion strategy may include two approaches, selection and weighted merging. For example, for the two regions with position numbers 1 and 2 (the corresponding region images are A1 and A2 in subband image LA1 and B1 and B2 in subband image LB1), the features of LA1 at these two positions are not obvious while the features of LB1 there are obvious; therefore, for the regions with position numbers 1 and 2, the two region images B1 and B2 of LB1 can be selected as the region images F1 and F2 of the fused subband image LF1 at position numbers 1 and 2.
Correspondingly, for position numbers 6, 7 and 9, the features of LA1 at these three positions are obvious while the features of LB1 there are not; therefore, for the three regions with position numbers 6, 7 and 9, the three region images A6, A7 and A9 of LA1 can be selected as the region images F6, F7 and F9 of the fused subband image LF1 at position numbers 6, 7 and 9.
For the four positions with position numbers 3, 4, 5 and 8, LA1 and LB1 both contain features at these positions that are not prominent but differ from each other, so A3, A4, A5 and A8 can be merged with B3, B4, B5 and B8 by position-wise weighting to obtain the region images F3, F4, F5 and F8 of subband image LF1 at position numbers 3, 4, 5 and 8.
After the above fusion, it can be seen that the region images F1 to F9 of the subband image LF1 of the final image to be recognized at position numbers 1 to 9 all contain good image features.
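For illustration, the region-wise fusion of two subband images might be sketched as follows; the 4:1 variance ratio used to decide that one region clearly dominates and the 50/50 weights are placeholder choices, and the precise variance-similarity rule is given in the next paragraphs:

    import numpy as np

    def split_regions(img, n=3):
        """Split one subband image into an n x n grid of region images."""
        rows = np.array_split(img, n, axis=0)
        return [[block for block in np.array_split(row, n, axis=1)] for row in rows]

    def fuse_subband_by_region(la1, lb1, n=3):
        """Region-wise fusion of two subband images LA1 and LB1 into LF1:
        selection where one region is clearly more featured (larger variance),
        a weighted merge where the regions carry comparable features."""
        fused_rows = []
        for row_a, row_b in zip(split_regions(la1, n), split_regions(lb1, n)):
            fused_row = []
            for a, b in zip(row_a, row_b):
                va, vb = a.var(), b.var()
                if max(va, vb) > 4 * min(va, vb):          # one side clearly dominates: select it
                    fused_row.append(a if va >= vb else b)
                else:                                       # comparable features: weighted merge
                    fused_row.append(0.5 * a + 0.5 * b)
            fused_rows.append(fused_row)
        return np.block(fused_rows)

    LA1 = np.random.rand(96, 96)
    LB1 = np.random.rand(96, 96)
    LF1 = fuse_subband_by_region(LA1, LB1)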
Further, fusing the region images of the subband images of the same subband spectrum in each of the regions according to a preset fusion strategy includes:
calculating the region variance of the region image corresponding to each subband image in each of the regions; calculating the similarity of the region images according to the region variances; and fusing the region images by selection or weighted merging according to the magnitude of the similarity.
In one embodiment, let R(x) be the subband coefficient matrix of a subband image obtained by wavelet decomposition of material image x, let l be a position (position number) in the subband image, and let R(x,l) denote the value of the decomposition coefficient at position l. Let X(x,l) and u(x,l) denote, respectively, the region variance and the mean over a region Q of given size at position l of each subband matrix of image x, where:
X(x,l) = ∑_{q∈Q} |R(x,q) - u(x,l)|²
where q denotes a point within the region Q.
The similarity of material images A and B over the region Q at position l can be calculated by the following formula:
M_A,B(l) = 2·X(A,l)·X(B,l) / (X(A,l)² + X(B,l)²)
The similarity M_A,B(l) reflects how similar the region variances of the two images are. The more similar the region images of the subband images of material images A and B at position l, the closer M_A,B(l) is to 1; the more different they are, the closer M_A,B(l) is to 0.
In this embodiment, whether the selection strategy or the weighted merging strategy is used when fusing the region images depends on the magnitude of the similarity M_A,B(l). A similarity threshold T can be introduced, and the strategy is determined by comparing M_A,B(l) with T: when the similarity M_A,B(l) is large, the weighted merging strategy is used; when M_A,B(l) is small, the selection strategy is used. That is:
When M_A,B(l) < T, the following can be used:
R(F,l) = R(A,l) if X(A,l) ≥ X(B,l), and R(F,l) = R(B,l) otherwise, where F denotes the fused subband image.
That is, the region image of the subband image of the material image with the larger region variance is selected for that region. This is because regions with larger variance are usually regions with more obvious features, containing edges, texture and larger amounts of differing information, whereas regions with smaller variance are usually background, solid-color and similar regions. By selecting the region image of the subband image of the material image with the larger region variance, the region image containing more feature information such as edges and texture is chosen as the subband image of the fused image in that region, so that more feature information is retained and recognition accuracy is higher.
When M_A,B(l) > T, the weighted fusion strategy can be used:
R(F,l) = W_max·R(A,l) + W_min·R(B,l) if X(A,l) ≥ X(B,l), and R(F,l) = W_min·R(A,l) + W_max·R(B,l) otherwise.
That is, when the similarity is large, the region image of the subband image of the material image with the larger region variance in that region is given more weight and the region image of the subband image of the material image with the smaller region variance is given less weight, and the two are merged by weighting with the corresponding coefficients W_max and W_min, where W_max is the weighting coefficient of the region image of the subband image of the material image with the larger region variance in that region, W_min is the weighting coefficient of the region image of the subband image of the material image with the smaller region variance, and the sum of W_max and W_min is 1.
In this embodiment, W_max and W_min are set as:
W_min = 1/2 - 1/2·(1 - M_A,B(l))/(1 - T), and W_max = 1 - W_min.
It should be noted that in other embodiments W_max and W_min may also be set according to the actual scenario. Choosing suitable values of W_max and W_min allows the fused image to retain more feature information.
Weighted fusion of the region images of the subband images of the material images in a given region according to this strategy retains, as far as possible, the feature information contained in the subband image of the material image with the relatively larger region variance, while also taking into account the feature information contained in the subband image of the material image with the relatively smaller region variance, ensuring that no feature information is missed. The subband image of the fused image in that region can therefore contain more feature information, which makes recognition more accurate.
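Putting the region variance, the similarity measure and the threshold rule together, the fusion of one pair of region images might look like the following sketch; the threshold value and the particular weighting expression are illustrative choices following a common convention in wavelet-fusion practice:

    import numpy as np

    def fuse_region(ra, rb, T=0.7):
        """Fuse the region images ra and rb taken at the same position l of two
        subband images: selection when the regions are dissimilar, weighted
        merging when they are similar (T is assumed to be < 1)."""
        xa = np.sum((ra - ra.mean()) ** 2)                 # region variance X(A, l)
        xb = np.sum((rb - rb.mean()) ** 2)                 # region variance X(B, l)
        m = 2.0 * xa * xb / (xa ** 2 + xb ** 2 + 1e-12)    # similarity M_A,B(l) in [0, 1]
        big, small = (ra, rb) if xa >= xb else (rb, ra)
        if m < T:                                          # dissimilar: keep the higher-variance region
            return big
        w_min = 0.5 - 0.5 * (1.0 - m) / (1.0 - T)          # weighted merging for similar regions
        w_max = 1.0 - w_min
        return w_max * big + w_min * small

    ra = np.random.rand(32, 32)
    rb = np.random.rand(32, 32)
    rf = fuse_region(ra, rb)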
The above image acquisition method is mainly applicable to QR code/barcode recognition scenarios: after the target image for feature recognition is obtained, QR code/barcode recognition can be performed on the target image. The method can also be applied to other fields of image recognition, such as face recognition, vehicle detection and security inspection, and to various other scenarios that require capturing images and recognizing features in them; the image acquisition method of the present invention is not limited to the QR code/barcode recognition process described above.
In one embodiment, an image acquisition device corresponding to the above image acquisition method is also provided. Specifically, as shown in FIG. 7, it includes a material image acquisition module 102 and an image fusion module 104, where:
the material image acquisition module 102 acquires two or more material images under one or more optical parameters, each material image corresponding to one parameter interval of the optical parameter, and the optical parameter includes at least one of polarization direction and spectrum; and
the image fusion module 104 performs image fusion on the acquired material images to obtain a target image for feature recognition.
In one embodiment, the material image acquisition module 102 is configured to acquire two or more material images through a hyperspectral image acquisition element and/or a polarized light image acquisition element, where the optical parameter corresponding to the hyperspectral image acquisition element is the spectrum and the optical parameter corresponding to the polarized light image acquisition element is the polarization direction.
In one embodiment, the image fusion module 104 is further configured to perform wavelet decomposition on the material images to obtain two or more subband images corresponding to subband spectra; fuse the subband images belonging to the same subband spectrum into a subband target image according to a preset fusion strategy; and perform an inverse wavelet transform on the subband target images corresponding to the respective subband spectra to obtain the target image for feature recognition.
In one embodiment, the image fusion module 104 is further configured to divide the subband images belonging to the same subband spectrum into two or more regions, and to fuse the region images of the subband images of the same subband spectrum in each of the regions according to a preset fusion strategy to obtain the subband target image.
In one embodiment, the image fusion module 104 is further configured to calculate the region variance of the region image corresponding to each subband image in each of the regions, calculate the similarity of the region images according to the region variances, and fuse the region images by selection or weighted merging according to the magnitude of the similarity.
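As a purely illustrative sketch of how the two logical modules might be organized in software (the class names, the sensor callables and the averaging placeholder are all assumptions, not the claimed implementation):

    import numpy as np

    class MaterialImageAcquisitionModule:
        """Hypothetical counterpart of module 102: collects one material image per parameter interval."""
        def __init__(self, sensors):
            self.sensors = sensors            # e.g. polarized-light and/or hyperspectral elements

        def acquire(self):
            return [sensor() for sensor in self.sensors]

    class ImageFusionModule:
        """Hypothetical counterpart of module 104: fuses the material images into the target image."""
        def fuse(self, images):
            return np.mean(images, axis=0)    # placeholder for the wavelet-domain fusion above

    device = MaterialImageAcquisitionModule(sensors=[lambda: np.random.rand(64, 64) for _ in range(3)])
    target_image = ImageFusionModule().fuse(device.acquire())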
Implementing the embodiments of the present invention has the following beneficial effects:
With the above image acquisition method and device, interference from glare caused by the illumination direction can be avoided by collecting material images in different polarization directions and then fusing them, so that polarized light produced by reflection in a particular illumination direction does not interfere with the feature information in the captured image, while feature information that was masked by glare but captured in other polarization directions is fused into the finally acquired image to be recognized as a supplement. At the same time, for pictures to be recognized that are transparent or translucent, or that have become blurred because the background has faded or been contaminated, material images can be collected in different spectral bands, the feature information contained in the material images of each band can be extracted, and this information can be added to the finally acquired image to be recognized through image fusion. The acquired image to be recognized thus contains the feature information of the picture to be recognized to the greatest possible extent, which further improves the accuracy of subsequent image recognition.
In one embodiment, as shown in FIG. 8, FIG. 8 shows a computer system based on the von Neumann architecture that runs the above image acquisition method. Specifically, it may include an external input interface 1001, a processor 1002, a memory 1003 and an output interface 1004 connected through a system bus. The external input interface 1001 may optionally include at least a network interface 10012 and a USB interface 10014. The memory 1003 may include an external memory 10032 (for example, a hard disk, an optical disc or a floppy disk) and an internal memory 10034. The output interface 1004 may include at least a device such as a display screen 10042.
In this embodiment, the method runs as a computer program whose program file is stored in the external memory 10032 of the aforementioned von Neumann computer system 10, is loaded into the internal memory 10034 at run time, is compiled into machine code and is then passed to the processor 1002 for execution, so that a logical material image acquisition module 102 and image fusion module 104 are formed in the von Neumann computer system 10. During execution of the above image acquisition method, the input parameters are received through the external input interface 1001, passed to the memory 1003 for buffering, and then fed to the processor 1002 for processing; the resulting data are either buffered in the memory 1003 for subsequent processing or passed to the output interface 1004 for output.
What has been disclosed above is merely a preferred embodiment of the present invention and certainly cannot be taken to limit the scope of rights of the present invention; equivalent changes made according to the claims of the present invention therefore still fall within the scope covered by the present invention.

Claims (11)

  1. An image acquisition method, characterized in that it comprises:
    acquiring two or more material images under one or more optical parameters, each material image corresponding to one parameter interval of the optical parameter, the optical parameter comprising at least one of a polarization direction and a spectrum; and
    performing image fusion on the acquired material images to obtain a target image for feature recognition.
  2. The image acquisition method according to claim 1, wherein acquiring two or more material images under one or more optical parameters comprises:
    acquiring two or more material images through a hyperspectral image acquisition element and/or a polarized light image acquisition element, the optical parameter corresponding to the hyperspectral image acquisition element being the spectrum, and the optical parameter corresponding to the polarized light image acquisition element being the polarization direction.
  3. The image acquisition method according to claim 1, wherein performing image fusion on the acquired material images to obtain a target image for feature recognition comprises:
    performing wavelet decomposition on the material images to obtain two or more subband images corresponding to subband spectra;
    fusing the subband images belonging to the same subband spectrum into a subband target image according to a preset fusion strategy; and
    performing an inverse wavelet transform on the subband target images corresponding to the respective subband spectra to obtain the target image for feature recognition.
  4. The image acquisition method according to claim 3, wherein fusing the subband images belonging to the same subband spectrum into a subband target image according to a preset fusion strategy comprises:
    dividing the subband images belonging to the same subband spectrum into two or more regions; and
    fusing the region images of the subband images of the same subband spectrum in each of the regions according to a preset fusion strategy to obtain the subband target image.
  5. The image acquisition method according to claim 4, wherein fusing the region images of the subband images of the same subband spectrum in each of the regions according to a preset fusion strategy comprises:
    calculating the region variance of the region image corresponding to each subband image in each of the regions;
    calculating the similarity of the region images according to the region variances; and
    fusing the region images by selection or weighted merging according to the magnitude of the similarity.
  6. The image acquisition method according to any one of claims 1 to 5, further comprising, after obtaining the target image for feature recognition:
    performing QR code/barcode recognition on the target image.
  7. An image acquisition device, characterized in that it comprises:
    a material image acquisition module, configured to acquire two or more material images under one or more optical parameters, each material image corresponding to one parameter interval of the optical parameter, the optical parameter comprising at least one of a polarization direction and a spectrum; and
    an image fusion module, configured to perform image fusion on the acquired material images to obtain a target image for feature recognition.
  8. The image acquisition device according to claim 7, wherein the material image acquisition module is configured to acquire two or more material images through a hyperspectral image acquisition element and/or a polarized light image acquisition element, the optical parameter corresponding to the hyperspectral image acquisition element being the spectrum, and the optical parameter corresponding to the polarized light image acquisition element being the polarization direction.
  9. The image acquisition device according to claim 8, wherein the image fusion module is further configured to perform wavelet decomposition on the material images to obtain two or more subband images corresponding to subband spectra; fuse the subband images belonging to the same subband spectrum into a subband target image according to a preset fusion strategy; and perform an inverse wavelet transform on the subband target images corresponding to the respective subband spectra to obtain the target image for feature recognition.
  10. The image acquisition device according to claim 9, wherein the image fusion module is further configured to divide the subband images belonging to the same subband spectrum into two or more regions, and to fuse the region images of the subband images of the same subband spectrum in each of the regions according to a preset fusion strategy to obtain the subband target image.
  11. The image acquisition device according to claim 10, wherein the image fusion module is further configured to calculate the region variance of the region image corresponding to each subband image in each of the regions, calculate the similarity of the region images according to the region variances, and fuse the region images by selection or weighted merging according to the magnitude of the similarity.
PCT/CN2018/120445 2018-12-12 2018-12-12 Image collection method and device WO2020118538A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/120445 WO2020118538A1 (en) 2018-12-12 2018-12-12 Image collection method and device
CN201880071472.2A CN111344711A (en) 2018-12-12 2018-12-12 Image acquisition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/120445 WO2020118538A1 (en) 2018-12-12 2018-12-12 Image collection method and device

Publications (1)

Publication Number Publication Date
WO2020118538A1 true WO2020118538A1 (en) 2020-06-18

Family

ID=71075426

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/120445 WO2020118538A1 (en) 2018-12-12 2018-12-12 Image collection method and device

Country Status (2)

Country Link
CN (1) CN111344711A (en)
WO (1) WO2020118538A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163627B (en) * 2020-10-09 2024-01-23 北京环境特性研究所 Fusion image generation method, device and system of target object
CN112766250B (en) * 2020-12-28 2021-12-21 合肥联宝信息技术有限公司 Image processing method, device and computer readable storage medium
CN115265781B (en) * 2022-07-14 2024-04-09 长春理工大学 System and method for rapidly acquiring plane array polarized spectrum image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1636556A4 (en) * 2003-06-25 2007-09-12 Univ Akron Multispectral, multifusion, laser-polarimetric optical imaging system
CN105139367A (en) * 2015-07-27 2015-12-09 中国科学院光电技术研究所 Visible-light polarization image fusion method based on non-subsampled shearlets
CN108090490A (en) * 2016-11-21 2018-05-29 南京理工大学 A kind of Stealthy Target detecting system and method based on multispectral polarization imaging

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140061310A1 (en) * 2012-08-31 2014-03-06 International Business Machines Corporation Two-dimensional barcode to avoid unintentional scanning
CN106033599A (en) * 2015-03-20 2016-10-19 南京理工大学 Visible light enhancement method based on polarized imaging
CN107392072A (en) * 2017-07-13 2017-11-24 广州市银科电子有限公司 The bill image in 2 D code acquisition method and device of a kind of multi-wavelength multiple light courcess
CN107918748A (en) * 2017-10-27 2018-04-17 南京理工大学 A kind of multispectral two-dimension code recognition device and method
CN108898569A (en) * 2018-05-31 2018-11-27 安徽大学 A kind of fusion method being directed to visible light and infrared remote sensing image and its fusion results evaluation method

Also Published As

Publication number Publication date
CN111344711A (en) 2020-06-26

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18942748

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/10/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18942748

Country of ref document: EP

Kind code of ref document: A1