WO2022247840A1 - Light source spectrum and multispectral reflectance image acquisition method, apparatus and electronic device - Google Patents

Light source spectrum and multispectral reflectance image acquisition method, apparatus and electronic device

Info

Publication number
WO2022247840A1
WO2022247840A1 (PCT/CN2022/094817; CN2022094817W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
multispectral
value
light source
pixel
Prior art date
Application number
PCT/CN2022/094817
Other languages
English (en)
French (fr)
Inventor
刘敏
龚冰冰
师少光
黄泽铗
张丁军
江隆业
Original Assignee
奥比中光科技集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 奥比中光科技集团股份有限公司
Publication of WO2022247840A1
Priority to US18/373,729 (published as US20240021021A1)


Classifications

    • G01J1/4204 — Photometry using electric radiation detectors, with determination of ambient light
    • G06V40/45 — Spoof/liveness detection: detection of the body part being alive
    • G01N21/25 — Investigating materials by optical means: colour; spectral properties (two or more wavelength bands)
    • G01J3/28 — Spectrometry; investigating the spectrum
    • G01J3/2803 — Investigating the spectrum using a photoelectric array detector
    • G01N21/84 — Optical investigation systems specially adapted for particular applications
    • G06T5/92 — Dynamic range modification of images based on global image properties
    • G06T7/136 — Image segmentation; edge detection involving thresholding
    • G06V10/25 — Determination of region of interest [ROI] or volume of interest [VOI]
    • G06V10/507 — Summing image-intensity values; histogram projection analysis
    • G06V10/60 — Extraction of image features relating to illumination properties (reflectance or lighting model)
    • G01J2003/2826 — Multispectral imaging, e.g. filter imaging
    • G01J2003/425 — Reflectance (reflection spectrometry)

Definitions

  • the present application relates to the technical field of multispectral detection, and in particular to a method, device and electronic equipment for acquiring a light source spectrum, and a method, device and electronic equipment for acquiring multispectral reflectance images.
  • the technologies of multispectral imaging and multispectral analysis can not only obtain the information of the spatial image dimension, but also obtain the information of the spectral dimension.
  • the principle of multispectral imaging is to divide the incident light into several narrow bands of light and image them on multispectral detectors respectively, so as to obtain images of different spectral bands and form multispectral three-dimensional data.
  • the multispectral data processing method extracts texture from the image dimension, for example local binary pattern (LBP) texture extraction, gray level co-occurrence matrix texture extraction, and histogram of oriented gradients (HOG) texture extraction; extracts material-composition and color-related features from the spectral dimension; and finally fuses the image dimension information and spectral dimension information to analyze the target object.
  • the multi-spectral reflectance image can be obtained by dividing the response value spectrum of each pixel of the multi-spectral image by the spectrum of the ambient light.
  • the reflectance is related only to the properties of the object itself and does not change with the light source.
  • the spectrum of the ambient light is the light source spectrum, which refers to the light source spectrum incident on the surface of the multi-spectral shooting object. It can be seen that obtaining a high-precision ambient light spectrum is very important for obtaining a high-precision multispectral reflectance image. Therefore, how to obtain the spectrum of ambient light with high precision is an urgent problem to be solved.
  • Embodiments of the present application provide a light source spectrum acquisition method, device, and electronic equipment, as well as a multi-spectral reflectance image acquisition method, device, and electronic equipment, capable of obtaining a spectrum of ambient light with high precision.
  • an embodiment of the present application provides a method for acquiring a light source spectrum, including:
  • a target area whose gray value is smaller than the threshold (or smaller than or equal to the threshold) is found, and the light source spectral response value is calculated based on the target area, which can improve the accuracy of the obtained light source spectrum.
  • the gray value corresponding to each pixel in the gray image is calculated according to the three-channel value of the pixel in the RGB image.
  • R, G and B represent the three channel values of each pixel in the RGB image
  • abs represents the absolute value function.
  • in some embodiments, after converting the RGB image into a grayscale image, the method further includes:
  • a threshold is determined according to the grayscale image.
  • determining the threshold according to the grayscale image includes: performing histogram statistics on the grayscale image, and determining the threshold according to an interval parameter of a minimum value interval in the histogram statistical result.
  • the determination of the threshold according to the interval parameter of the smallest numerical interval in the histogram statistical results includes:
  • the threshold is determined according to the interval boundary value and pixel ratio of the minimum value interval in the histogram statistical results.
  • the calculating the light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image includes:
  • an embodiment of the present application provides a method for acquiring a multispectral reflectance image, including:
  • a target area whose gray value is less than the threshold (or less than or equal to the threshold) is found, and the light source spectral response value is calculated based on the target area, which can improve the accuracy of the obtained light source spectrum, thereby improving the precision of the multispectral reflectance image.
  • the gray value corresponding to each pixel in the gray image is calculated according to the three-channel value of the pixel in the RGB image.
  • R, G and B represent the three channel values of each pixel in the RGB image
  • abs represents the absolute value function.
  • in some embodiments, after converting the RGB image into a grayscale image, the method further includes:
  • a threshold is determined according to the grayscale image.
  • determining the threshold according to the grayscale image includes: performing histogram statistics on the grayscale image, and determining the threshold according to an interval parameter of a minimum value interval in the histogram statistical result.
  • the determination of the threshold according to the interval parameter of the smallest numerical interval in the histogram statistical results includes:
  • the threshold is determined according to the interval boundary value and pixel ratio of the minimum value interval in the histogram statistical results.
  • the calculating the light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image includes:
  • the acquiring a multispectral reflectance image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value includes:
  • an embodiment of the present application provides a light source spectrum acquisition device, including:
  • an acquisition module, configured to acquire a multispectral image and determine the multispectral response value of each pixel in the multispectral image;
  • a reconstruction module, configured to reconstruct an RGB image according to the multispectral image;
  • a conversion module, configured to convert the RGB image into a grayscale image;
  • a first calculation module, configured to determine a target area in the grayscale image whose gray value is less than the threshold (or less than or equal to the threshold), and to calculate the light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
  • an embodiment of the present application provides a multispectral reflectance image acquisition device, including:
  • an acquisition module, configured to acquire a multispectral image and determine the multispectral response value of each pixel in the multispectral image;
  • a reconstruction module, configured to reconstruct an RGB image according to the multispectral image;
  • a conversion module, configured to convert the RGB image into a grayscale image;
  • a first calculation module, configured to determine a target area in the grayscale image whose gray value is less than the threshold (or less than or equal to the threshold), and to calculate the light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image;
  • a second calculation module, configured to acquire a multispectral reflectance image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value.
  • an embodiment of the present application provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein when the processor executes the computer program, it implements the light source spectrum acquisition method as described in the first aspect or any implementation manner of the first aspect, and/or implements the multispectral reflectance image acquisition method as described in the second aspect or any implementation manner of the second aspect.
  • an embodiment of the present application provides a computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method described in the first aspect or any implementation manner of the first aspect is implemented.
  • the embodiment of the present application provides a computer program product, which, when the computer program product runs on the electronic device, causes the electronic device to execute the light source spectrum acquisition method described in the first aspect or any implementation manner of the first aspect , and/or, implementing the multispectral reflectance image acquisition method as described in the second aspect or any implementation manner of the second aspect.
  • Fig. 1 is a schematic diagram of the implementation flow of a multi-spectral reflectance image acquisition method provided by an embodiment of the present application
  • Fig. 2 is a schematic diagram of statistical results of histogram statistics of grayscale images provided by an embodiment of the present application
  • Fig. 3 is a schematic diagram of the implementation flow of another multi-spectral reflectance image acquisition method provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of the implementation flow of a living body detection method provided by an embodiment of the present application.
  • Fig. 5 is a schematic flow diagram of another living body detection method provided by an embodiment of the present application.
  • Fig. 6 is a schematic diagram of a light source spectrum acquisition device provided by an embodiment of the present application.
  • Fig. 7 is a schematic diagram of another light source spectrum acquisition device provided by an embodiment of the present application.
  • Fig. 8 is a schematic diagram of a multi-spectral reflectance image acquisition device provided by an embodiment of the present application.
  • Fig. 9 is a schematic diagram of another multi-spectral reflectance image acquisition device provided by an embodiment of the present application.
  • Fig. 10 is a schematic diagram of an electronic device provided by an embodiment of the present application.
  • One embodiment or “some embodiments” or the like described in the specification of the present application means that a specific feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases “in one embodiment,” “in some embodiments,” “in other embodiments,” etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean “one or more but not all embodiments” unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • the first one is a light source spectrum estimation method based on the white world. This method finds the brightest region of the multispectral image, and obtains its average spectrum as the light source spectrum. This method has a better restoration effect when the brightest area is a white area.
  • the second is a gray-world-based light source spectrum estimation method. This method obtains the average spectrum of the whole multispectral image as the light source spectrum. This method has a better restoration effect on scenes with rich colors.
  • Both methods estimate the light source spectrum from a rough estimate over the entire multispectral image. For example, white-world estimation takes the spectrum of the brightest area in the multispectral image as the light source spectrum; if the brightest area is not white, the estimation error will be large. Similarly, gray-world estimation takes the average of all pixels in the multispectral image as the light source spectrum; if the image contains few white areas and a large area of a single color, the estimation error will be large.
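The two baseline estimators described above can be sketched as follows. This is a minimal illustration, not the application's proposed method; the channel-sum brightness measure and the top-1% cutoff for the "brightest region" are assumptions of the sketch.

```python
import numpy as np

def white_world_spectrum(cube, top_fraction=0.01):
    """White-world estimate: average spectrum of the brightest region.

    cube: H x W x C multispectral image. For illustration, brightness is
    the sum over channels, and the brightest `top_fraction` of pixels
    form the "brightest region" (both choices are assumptions).
    """
    flat = cube.reshape(-1, cube.shape[-1])
    brightness = flat.sum(axis=1)
    k = max(1, int(top_fraction * flat.shape[0]))
    idx = np.argsort(brightness)[-k:]  # indices of the brightest pixels
    return flat[idx].mean(axis=0)

def gray_world_spectrum(cube):
    """Gray-world estimate: average spectrum of the whole image."""
    return cube.reshape(-1, cube.shape[-1]).mean(axis=0)
```

On a uniformly lit, perfectly gray scene the two estimators agree; the failure modes described in the text appear when the brightest patch is colored (white world) or when one color dominates the scene (gray world).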
  • the embodiment of the present application provides a method for acquiring a multispectral reflectance image, which acquires a multispectral image and locates the light source area in it, so as to determine the light source spectrum according to the multispectral information of that area.
  • Fig. 1 is a schematic flow diagram of an implementation process of a multi-spectral reflectance image acquisition method provided by an embodiment of the present application, and the multi-spectral reflectance image acquisition method in this embodiment can be executed by an electronic device.
  • Electronic devices include, but are not limited to, computers, tablets, servers, mobile phones, or multispectral cameras, etc.
  • Servers include but are not limited to stand-alone servers or cloud servers.
  • the multi-spectral reflectance image acquisition method in this embodiment is applicable to the situation where it is necessary to estimate the spectrum of the light source (or the approximate spectrum of the light source) in the current environment. As shown in Figure 1, the multispectral reflectance image acquisition method may include steps S110 to S150.
  • the multispectral image is a single multispectral image.
  • the multispectral image of any scene (there is ambient light or light source in the scene) is collected by a multispectral camera.
  • the information contained in a single multispectral image includes the response value information of each pixel, and the response value information represents the response of the light reflected to the multispectral camera on the multispectral camera.
  • the response value information varies with the intensity of the light source, the shape of the spectrum of the light source, and the lighting direction of the light source.
  • the number of channels of the multi-spectral camera can range from a few to more than a dozen, such as eight channels, nine channels, or sixteen channels.
  • the number of channels of the multispectral camera and the wavelength band of each channel are not specifically limited.
  • a nine-channel multispectral camera is used as an example of the multispectral camera. It should be understood that the exemplary description cannot be construed as a specific limitation on this embodiment.
  • the multispectral camera is a nine-channel multispectral camera, and each pixel of the nine-channel multispectral camera can obtain nine response values x1, x2, x3, x4, x5, x6, x7, x8, x9. That is, the multispectral response value of each pixel consists of nine response values corresponding to the nine channels.
  • x1 represents the response value of the first channel with q1 response curve characteristics
  • x2 represents the response value of the second channel with q2 response curve characteristics
  • x3 represents the response value of the third channel with q3 response curve characteristics
  • x9 represents the response value of the ninth channel with the characteristics of the q9 response curve. That is to say, xi represents the response value of the i-th channel with the characteristics of the qi response curve, where i is an integer from 1 to 9.
  • Each pixel in an RGB image has three channel response values, namely the R value of the R channel, the G value of the G channel, and the B value of the B channel.
  • Reconstructing an RGB image from a multispectral image is to calculate the R value, G value and B value of each pixel in the multispectral image according to the multispectral response value of the pixel.
  • step S120 reconstructing an RGB image according to the multispectral image includes the following steps S121 to S124.
  • the QE response curve matrix of the nine channels of the multispectral camera is obtained, and the QE response curve matrix can be recorded as q1, q2, q3, q4, q5, q6, q7, q8, q9.
  • matrix q1 is the response curve of the first channel
  • matrix q2 is the response curve of the second channel
  • matrix q9 is the response curve of the ninth channel. That is to say, the matrix qj is the response curve of the jth channel, and j is an integer ranging from 1 to 9.
  • these response curves can be obtained through testing. These curves can be pre-stored in the memory of the electronic device after the test and can be recalled when needed.
  • the linear fitting method is used to fit the r curve, g curve and b curve as linear combinations of the nine channel response curves q1, q2, q3, q4, q5, q6, q7, q8, q9.
  • the formula for the linear fit takes the form r ≈ a1*q1 + a2*q2 + … + a9*q9, and likewise for the g curve (parameters b1 to b9) and the b curve (parameters c1 to c9).
  • in step S110 the nine-channel response values of a certain pixel in the multispectral image are determined as x1, x2, x3, x4, x5, x6, x7, x8, x9; the fitting parameters are calculated in step S123; and in step S124 a fitting calculation is performed using the fitting parameters and the nine-channel response values of the pixel to obtain the R value, G value and B value of the pixel.
  • the formula is as follows:
  • R = a1*x1+a2*x2+a3*x3+a4*x4+a5*x5+a6*x6+a7*x7+a8*x8+a9*x9;
  • G = b1*x1+b2*x2+b3*x3+b4*x4+b5*x5+b6*x6+b7*x7+b8*x8+b9*x9;
  • B is obtained analogously from the b-curve fitting parameters, i.e. B = c1*x1+c2*x2+…+c9*x9.
  • the R value, G value and B value of each pixel in the multispectral image are obtained, and the RGB image corresponding to the entire multispectral image is obtained, that is, the RGB image is reconstructed according to the multispectral image.
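Steps S121 to S124 can be sketched with an ordinary least-squares fit. This is an illustrative sketch under stated assumptions: the function names and the use of `numpy.linalg.lstsq` are my own; the text only specifies a linear fit of the r/g/b curves against the nine QE curves, followed by applying the fitted weights to each pixel.

```python
import numpy as np

def fit_rgb_weights(Q, r_curve, g_curve, b_curve):
    """Steps S121-S123: fit the r, g and b response curves as linear
    combinations of the nine channel QE curves q1..q9.

    Q: 9 x L matrix whose row j is the QE curve of channel j sampled at
    L wavelengths; r_curve, g_curve, b_curve: length-L target curves.
    Returns the fitting parameters (a1..a9), (b1..b9), (c1..c9).
    """
    A = Q.T  # L x 9 design matrix: one column per channel curve
    a = np.linalg.lstsq(A, r_curve, rcond=None)[0]
    b = np.linalg.lstsq(A, g_curve, rcond=None)[0]
    c = np.linalg.lstsq(A, b_curve, rcond=None)[0]
    return a, b, c

def reconstruct_rgb(cube, a, b, c):
    """Step S124: R = a1*x1 + ... + a9*x9 (and likewise G, B) applied
    to every pixel of the H x W x 9 multispectral cube."""
    return np.stack([cube @ a, cube @ b, cube @ c], axis=-1)
```

The fit only has to be computed once per camera (the QE curves are fixed hardware characteristics, pre-stored as the text notes); reconstruction is then a per-pixel matrix multiply.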
  • after the RGB image is reconstructed, it can also be white-balanced to obtain a white-balanced RGB image, which can be recorded as the RGB_wb image.
  • the RGB_wb image is converted into a grayscale image.
  • an existing white balance method, such as gray world, white world or automatic threshold, can be used.
  • in this way, the area with a deta value close to 0 obtained in the subsequent step S140 corresponds better to the gray or white area, and the area selection result is more accurate, thereby yielding a more accurate light source spectrum.
  • the grayscale image may be called a deta image.
  • the gray value corresponding to each pixel in the gray image is calculated according to the multi-channel values of the pixel in the RGB image.
  • the R value, G value and B value of the three channels of R, G and B of each pixel in the RGB image calculate the gray value (or deta value) of the pixel, according to the gray value (or deta value) of each pixel ) to get the grayscale image (or deta image) corresponding to the RGB image. That is to say, the grayscale value (or deta value) corresponding to each pixel of the grayscale image (or deta image) is calculated according to the multi-channel values of the pixel in the RGB image, ie R value, G value and B value.
  • extract the three channels R, G and B of the RGB image, and for each pixel calculate the corresponding deta value according to the formula deta = abs(1-G/B) + abs(1-R/B); assign the deta value as the gray value of that pixel of the grayscale image, and obtain the deta image from the gray values of all pixels, where abs in the formula represents the absolute value function.
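The conversion above, deta = abs(1 - G/B) + abs(1 - R/B), can be written directly. One caveat in this sketch: the epsilon guard against B = 0 is an added assumption, not part of the text.

```python
import numpy as np

def deta_image(rgb):
    """Compute the deta (grayscale) image from an RGB image using
    deta = abs(1 - G/B) + abs(1 - R/B).  A pixel whose R, G and B
    values are close to each other has deta near 0, making it a
    candidate white/gray pixel.

    rgb: H x W x 3 float array.  A small epsilon replaces B == 0 to
    avoid division by zero (illustrative guard, not in the original).
    """
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    B = np.where(B == 0, 1e-12, B)
    return np.abs(1 - G / B) + np.abs(1 - R / B)
```

A perfectly gray pixel such as (5, 5, 5) gives deta = 0, while a strongly colored pixel gives a large deta, which is exactly the ordering the threshold in step S140 relies on.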
  • S140 Determine a target area in the grayscale image whose grayscale value is smaller than a threshold, and calculate a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
  • the threshold t may take a value close to 0.
  • the spectral response value of the light source is calculated according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
  • the purpose of finding the region where the deta value is close to 0 in the deta image is to find the region where the three values of R value, G value and B value are all close to each other.
  • when the R value, G value and B value are close, the pixel may belong to a white area, or to a gray area of some gray level. Since the reflectance of a white and/or gray area is flat across wavelengths, the spectral curve of the reflected light is consistent in shape with that of the incident light source, differing only in brightness. Therefore, the spectrum of the white and/or gray area can more accurately reflect the spectrum of the light source.
  • histogram statistics are performed on the deta image, that is, the distribution of deta data in the deta image is counted by using the histogram, and the threshold t is determined according to the histogram statistics result of the deta image. Specifically, after the histogram statistics are performed on the deta image, the threshold t is determined according to the interval parameter of the minimum value interval in the histogram statistics results.
  • the interval parameters include, but are not limited to, one or more of the number of pixels, the proportion of pixels, and interval boundary values.
  • the statistical process of performing histogram statistics on a grayscale image is as follows: first find the minimum value M0 and the maximum value M10 of the gray values (deta values); then divide the range from M0 to M10 into 10 numerical intervals, which from small to large are [M0, M1), [M1, M2), [M2, M3), [M3, M4), [M4, M5), [M5, M6), [M6, M7), [M7, M8), [M8, M9), [M9, M10], where M0, M1, M2, M3, M4, M5, M6, M7, M8, M9 and M10 are called the interval values M.
  • the proportion of pixels falling in the first numerical interval is h1.
  • the proportion h of pixels from the second numerical interval to the tenth numerical interval is obtained as follows: h2, h3, h4, h5, h6, h7, h8, h9, h10.
  • the t value corresponding to each numerical interval is different and is related to the interval values M and the h value of that interval. In this embodiment, only the t value of the first numerical interval needs to be found, that is, the t value for which deta is close to 0.
  • Determine the interval parameter of the first (smallest) numerical interval: count the number of pixels whose deta value is greater than or equal to M0 and less than M1, i.e. the number of pixels in the smallest numerical interval; its proportion of the total number of pixels is h1, that is, the pixel proportion of the first value interval is h1.
  • determine the threshold t according to M0, M1 and h1. For example, t = M0 + (M1 - M0) * h1. This determines a value of t for which deta is close to 0.
  • after the threshold t is determined, find the target area with deta < t in the grayscale image, that is, the area whose deta value is close to 0, and calculate, for each of the nine channels, the average value over the pixels corresponding to the target area in the multispectral image.
  • the averaged multispectral data of the target area is the approximate light source spectrum.
  • suppose the target area with deta < t in the grayscale image includes N pixels, where N is a positive integer. Obtain the nine-channel multispectral response values of the N pixels corresponding to the target area in the multispectral image, and for each of the nine channels calculate the average of the multispectral response values over the N pixels; these averages are used as the light source spectral response value.
  • Each of the N pixels corresponds to a multispectral response value of nine channels, so the average value is nine values corresponding to nine channels.
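Selecting the target area and averaging its nine-channel responses can be sketched as below. The fallback to the global mean when no pixel passes the threshold is an added safeguard of this sketch, not part of the original description.

```python
import numpy as np

def light_source_spectrum(cube, deta, t):
    """Average the per-channel response values over the target area
    (pixels with deta < t) to obtain the light source spectral
    response values y1..yC.

    cube: H x W x C multispectral image; deta: H x W deta image.
    Falls back to the global mean if no pixel qualifies (illustrative
    guard, not in the original).
    """
    mask = (deta < t).reshape(-1)
    flat = cube.reshape(-1, cube.shape[-1])
    if not mask.any():
        return flat.mean(axis=0)
    return flat[mask].mean(axis=0)
```

Each selected pixel contributes one C-vector, so the result is one average response value per channel, matching the nine values y1..y9 used in step S150.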
  • the number of numerical intervals used in the histogram statistics can be an empirical value, for example obtained from experience with existing shooting data. The more intervals there are, the closer the deta values of the obtained target area are to 0, and in theory the more accurate the obtained light source spectrum. However, when the intervals are so fine that the target area with deta close to 0 includes only a few pixels, the obtained light source spectrum may be too noisy. Therefore, the number of intervals needs to be a compromise, neither too large nor too small. This application does not specifically limit this.
  • the multiple numerical intervals divided during the histogram statistics may include one or more of left-open and right-close intervals, left-close and right-open intervals, left-open and right-open intervals, left-close and right-close intervals, etc. combination of species. This application does not specifically limit this.
  • the multispectral response value of each pixel in the multispectral image is determined in step S110, and the light source spectral response value is determined in step S140. Therefore, in step S150, the multispectral reflectance image is obtained by dividing the multispectral response value of each pixel in the multispectral image by the light source spectral response value.
  • the nine-channel multispectral response values of a certain pixel in the multispectral image are x1, x2, x3, x4, x5, x6, x7, x8, and x9.
  • the light source spectral response value, that is, the per-channel average of the multispectral response values over the target area, is y1, y2, y3, y4, y5, y6, y7, y8, y9.
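Step S150 then reduces to a channel-wise division: each pixel's response values x1..x9 are divided by the light source spectral response values y1..y9. A one-line sketch (y is the nine-value spectrum from step S140):

```python
import numpy as np

def reflectance_image(cube, y):
    """Step S150: divide each pixel's response values x1..x9 by the
    light source spectral response values y1..y9, channel-wise, to
    obtain the multispectral reflectance image.

    cube: H x W x C multispectral image; y: length-C spectrum.
    """
    return cube / np.asarray(y)  # y broadcasts over the H x W pixels
```

Since reflectance depends only on the surface material, this normalization removes the light source's spectral shape from every pixel at once.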
  • This embodiment exploits the fact that an RGB image can be restored from the multispectral image, and finds the white or gray area in the restored RGB image. Since the spectrum of the white or gray area in the multispectral image is the spectrum closest to that of the light source, this solution adds a region-selection step and uses the average spectrum of that region as the approximate light source spectrum.
  • The estimated light source spectrum has high accuracy and can be applied to scenes that use different light sources.
  • Accordingly, the multispectral reflectance image calculated from the light source spectrum is also more accurate.
  • Fig. 3 is a schematic flowchart of an implementation process of a multi-spectral reflectance image acquisition method provided by another embodiment of the present application, and the multi-spectral reflectance image acquisition method in this embodiment can be executed by an electronic device.
  • the method for acquiring a multi-spectral reflectance image may include steps S210 to S250. It should be understood that for the similarities between the second embodiment and the first embodiment, please refer to the description of the foregoing method for details, and details are not repeated here.
  • S210 Acquire a multispectral image, and determine a multispectral response value of each pixel in the multispectral image.
  • the RGB image is reconstructed from the multispectral image, so the RGB image and the multispectral image have the same viewing angle.
  • the RGB image of the same scene is acquired by another camera, that is, a color camera. Therefore, the RGB image differs from the multispectral image acquired by the multispectral camera, and a matching operation is required.
  • the pixels in the RGB image and the pixels in the multispectral image are matched one-to-one; for example, the pixels of an object in the RGB image are associated with the pixels of the same object in the multispectral image.
  • the gray-white area is first found in the RGB image;
  • then, through the pixel correspondence, the gray-white area in the multispectral image is found, and the average of the multi-channel responses over this area is calculated as the approximate light source spectral response value.
  • the color camera and the multispectral camera are arranged adjacent to each other.
  • the matching between the RGB image and the multispectral image then yields more corresponding pixels, which can increase the accuracy of the light source spectrum estimation result.
  • the gray value corresponding to each pixel in the gray image is calculated according to the multi-channel value of the pixel in the matched RGB image.
  • S240 Determine a target area in the grayscale image whose grayscale value is smaller than a threshold, and calculate a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
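Steps S230 and S240 can be sketched as follows. The use of the deta formula deta=abs(1-G/B)+abs(1-R/B) as the grayscale conversion follows the earlier description; the function name and the concrete arrays are illustrative assumptions.

```python
import numpy as np

def light_source_spectrum(multispectral_image, rgb_image, threshold):
    """Build the deta grayscale image from the RGB channels, select the
    target area whose deta value is below the threshold, and average the
    multispectral responses of those pixels per channel."""
    R = rgb_image[..., 0].astype(float)
    G = rgb_image[..., 1].astype(float)
    B = rgb_image[..., 2].astype(float)
    # deta = abs(1 - G/B) + abs(1 - R/B): near 0 for gray/white pixels
    deta = np.abs(1 - G / B) + np.abs(1 - R / B)
    mask = deta < threshold                        # target area
    return multispectral_image[mask].mean(axis=0)  # per-channel average
```

This assumes the RGB and multispectral images are already pixel-aligned, as in both embodiments after matching or reconstruction.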
  • step S120 differs from step S220, while the other steps are the same or similar.
  • in Embodiment 1, the RGB image is obtained by reconstruction from the multispectral image, so the RGB image and the multispectral image come from the same camera and share the same viewing angle. Therefore, the light source spectrum estimated in Embodiment 1 is more accurate than that estimated in Embodiment 2.
  • liveness detection, or other models, can therefore have better robustness when applied under different light sources. For example, when liveness detection is based on the multispectral reflectance image, the analysis results do not change as the light source changes, which gives good robustness. A liveness detection method is introduced next.
  • since the spectral characteristics of real human skin and prostheses differ considerably in several characteristic bands, the spectral characteristics of skin itself can be applied to eliminate most prostheses, which is enough to meet the accuracy needs of ordinary products.
  • the characteristics of real human skin include: in the 420-440nm (unit: nanometer) band, skin-specific melanin absorption; in the 550-590nm band, skin-specific hemoglobin absorption; in the 960-980nm band, skin-specific water absorption; and in the 800-850nm band, weaker skin absorption (i.e. higher reflection), etc.
  • this band-ratio method can be used as the first step in multispectral liveness detection to exclude most prostheses. When particularly high-precision prostheses are encountered, a machine learning or deep learning model with higher accuracy can be used for the judgment. The calculation process of the band-ratio method is simpler than that of the model-based method, and it is less affected by factors such as ambient light and dark noise.
  • Fig. 4 is a schematic flow chart of a living body detection method provided by another embodiment of the present application.
  • the living body detection method in this embodiment can be executed by an electronic device.
  • the living body detection method may include steps S310 to S340.
  • S310 Acquire a multispectral image including human skin, where the multispectral image includes at least one pixel.
  • the human skin includes, but is not limited to, the skin of an uncovered part or area of the human body, such as the skin of a human face, the skin of a certain area of the face, or the skin of fingers.
  • a multispectral image containing human skin is acquired by a multispectral camera.
  • the multispectral image includes at least one pixel. It should be noted that the at least one pixel is a pixel for imaging human skin.
  • according to the multispectral image, determine the first multispectral response value Dw1 of the at least one pixel in the first characteristic band, and the second multispectral response value Dw2 in the second characteristic band.
  • the multi-spectral image includes multi-channel multi-spectral response values of each pixel.
  • in other embodiments the number of channels and their bands are not limited, while in Embodiment 3 the multiple channels include at least the two channels of the first characteristic band and the second characteristic band, and the number and bands of the other channels are not limited. That is, in Embodiment 3 the multispectral camera has at least two channels, covering the first characteristic band and the second characteristic band.
  • the multispectral image includes multispectral response values of at least two channels for each pixel, that is, includes a first multispectral response value Dw1 of a first characteristic band, and a second multispectral response value Dw2 of a second characteristic band. Therefore, the first multispectral response value Dw1 of the first characteristic band and the second multispectral response value Dw2 of the second characteristic band of at least one pixel corresponding to the human skin can be determined according to the multispectral image.
  • two representative wavebands may be selected according to the reflection spectrum characteristics of real human skin, that is, the first characteristic waveband w1 and the second characteristic waveband w2.
  • the first characteristic waveband w1 is selected as an absorption peak band unique to real human skin, in which the reflectance of a prosthesis differs greatly from that of real human skin.
  • for example, the 420-440nm band, or a sub-band within it, which is the melanin absorption band unique to real human skin;
  • another example is the 550-590nm band, or a sub-band within it, which is the hemoglobin absorption band unique to real human skin;
  • another example is the 960-980nm band, or a sub-band within it, which is the water absorption band unique to real human skin.
  • the second characteristic waveband w2 is selected as a non-absorption-peak band of real human skin, that is, a band where real human skin absorbs weakly (or reflects strongly), such as the 800-850nm band or a sub-band within it.
  • the first light source spectral response value Sw1 of the first characteristic band and the second light source spectral response value Sw2 of the second characteristic band are acquired according to the multispectral image.
  • the first light source spectral response value Sw1 in the first characteristic band of the multispectral image and the second light source spectral response value Sw2 in the second characteristic band can be acquired using existing technologies.
  • the method for obtaining the spectral response value of the light source described in Embodiment 1 and Embodiment 2 can be used to obtain the first light source spectral response value Sw1 in the first characteristic band, and in the second characteristic The spectral response value Sw2 of the second light source in the band.
  • the RGB image corresponding to the multispectral image is obtained: the RGB image can be reconstructed from the multispectral image (see Embodiment 1), or it can be taken of the same scene when the multispectral image is taken (see Embodiment 2). The RGB image is then converted into a grayscale image, and the target area whose gray value is less than the threshold is determined in the grayscale image. Finally, among the multiple channels of the target area, the average of the multispectral response values of the channel of the first characteristic band is calculated as the first light source spectral response value Sw1, and the average of the multispectral response values of the channel of the second characteristic band is calculated as the second light source spectral response value Sw2.
  • the first light source spectral response value Sw1 and the second light source spectral response value Sw2 obtained in this way are more accurate, so the accuracy of the subsequent liveness detection results can be improved.
  • since the light source spectrum estimation methods of Embodiment 1 and Embodiment 2 are applicable to application scenarios with different light sources, the liveness detection scheme can have better robustness when applied under different light sources.
  • the multispectral response value of the at least one pixel in the first characteristic band w1 is Dw1, and the estimated light source spectral response in w1 is the first light source spectral response value Sw1;
  • the multispectral response value of the at least one pixel in the second characteristic band w2 is Dw2, and the estimated light source spectral response in w2 is the second light source spectral response value Sw2.
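A minimal sketch of the band-ratio computation and comparison in steps S320-S340. The decision polarity, that a ratio Rw below the threshold k indicates a living body, is an assumption consistent with real skin absorbing strongly in w1 (low Rw1) and reflecting strongly in w2 (high Rw2); the text leaves the behavior at equality configurable.

```python
def band_ratio_liveness(Dw1, Dw2, Sw1, Sw2, k):
    """Compare the band reflectance ratio Rw against the threshold k."""
    Rw1 = Dw1 / Sw1    # reflectance in the first characteristic band w1
    Rw2 = Dw2 / Sw2    # reflectance in the second characteristic band w2
    Rw = Rw1 / Rw2     # band ratio
    return Rw < k      # True -> judged a living body (assumed polarity)
```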
  • the comparison condition can be adjusted according to the actual accuracy requirements of liveness detection; for example, when the ratio Rw is equal to the threshold k, the corresponding detection result can be set to: the human body is determined to be a living body. This application does not specifically limit this.
  • a step of determining the threshold k is also included before step S340.
  • the process of determining the threshold k includes: obtaining the first sample reflectance R1 and the second sample reflectance R2 of multiple real skin samples in the first characteristic band and the second characteristic band, calculating the first sample reflectance ratio R1/R2 for each sample, and determining the maximum value a of the first sample reflectance ratio among the real skin samples.
  • likewise, the third sample reflectance R3 and the fourth sample reflectance R4 of multiple different types of prosthesis samples in the first characteristic band and the second characteristic band are obtained, the second sample reflectance ratio R3/R4 is calculated for each prosthesis sample, and the minimum value b of the second sample reflectance ratio among the prosthesis samples is determined.
  • the threshold k is determined according to the maximum value a and the minimum value b.
  • M is an integer greater than 1.
  • M different real skin samples are collected by a spectrometer; for each sample, the first sample reflectance R1 in the first characteristic band and the second sample reflectance R2 in the second characteristic band are measured, the first sample reflectance ratio R1/R2 is calculated for each of the M real skin samples, and the maximum value a of these ratios is found.
  • N is an integer greater than 1.
  • N different types of prosthesis samples are collected by a spectrometer; for each sample, the third sample reflectance R3 in the first characteristic band and the fourth sample reflectance R4 in the second characteristic band are measured,
  • and the second sample reflectance ratio R3/R4 is calculated for each of the N prosthesis samples; the minimum value b of these ratios is found.
  • for example, the threshold k is greater than or equal to the smaller of the two values a and b, and less than or equal to the mean of a and b.
  • the specific value of the threshold k can be determined according to the requirements of the actual application. In this embodiment, a simple design of the threshold k can distinguish more living bodies and prostheses.
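The threshold design above can be sketched as follows. Picking the midpoint of the admissible range [min(a, b), (a + b) / 2] is merely one illustrative choice, not mandated by the text; the function name and sample values are assumptions.

```python
def determine_threshold_k(skin_ratios, prosthesis_ratios):
    """Derive k from sample reflectance ratios as described above."""
    a = max(skin_ratios)         # max R1/R2 over M real skin samples
    b = min(prosthesis_ratios)   # min R3/R4 over N prosthesis samples
    lo = min(a, b)               # k must be >= the smaller of a and b
    hi = (a + b) / 2             # ... and <= the mean of a and b
    return (lo + hi) / 2         # midpoint: one simple illustrative choice

# Synthetic spectrometer ratios: skin ratios are small (strong w1 absorption),
# prosthesis ratios are large.
k = determine_threshold_k([0.10, 0.20, 0.30], [0.80, 0.90])
```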
  • FIG. 5 is a schematic flow chart of an implementation of a living body detection method provided by another embodiment of the present application.
  • the living body detection method in this embodiment can be executed by an electronic device.
  • the living body detection method may include steps S410 to S440. It should be understood that, for the similarities between Embodiment 4 and Embodiment 3, please refer to the description of Embodiment 3 above for details, and details are not repeated here.
  • the living body detection model is a trained detection model for judging whether the human body to be tested is a living body.
  • the first ratio Rw1, the second ratio Rw2 and the third ratio Rw are input into the living body detection model, and the model can output a classification result of whether the human body to be detected is a living body or a prosthesis.
  • the living body detection model may include a machine learning or deep learning model.
  • for example, the living body detection model may be a support vector machine model, a neural network model, a Bayesian classifier, or a model such as a random forest.
  • the present application does not specifically limit the living body detection model.
  • the living body detection model may be a binary classification model whose two classes are: the human body to be detected is a living body, and the human body to be detected is a prosthesis. For example, if [Rw1, Rw2, Rw1/Rw2] is input into the living body detection model, an output of 1 indicates that the human body to be tested is a living body, and an output of 0 indicates that it is a prosthesis.
  • the living body detection model may include a multi-classification living body detection model.
  • the living body detection model may classify living bodies and/or prostheses in a more fine-grained manner.
  • the prosthesis is further subdivided to distinguish different types or categories of the prosthesis (eg, different types or types of prosthesis correspond to different materials of the prosthesis).
  • the present application does not specifically limit the number of classifications of the living body detection model.
  • the process of obtaining a trained liveness detection model includes: obtaining first sample vectors and corresponding labels of multiple real skin samples, where each first sample vector includes three features of a real skin sample: the first sample reflectance value in the first characteristic band, the second sample reflectance value in the second characteristic band, and the ratio of the first sample reflectance value to the second sample reflectance value;
  • and obtaining second sample vectors and corresponding labels of multiple different types of prosthesis samples, where each second sample vector includes the third sample reflectance value of the prosthesis sample in the first characteristic band, the fourth sample reflectance value in the second characteristic band, and the ratio of the third sample reflectance value to the fourth sample reflectance value.
  • the trained liveness detection model can classify living bodies and prostheses; that is, it can be used to identify whether the human body to be tested is a living body. It should be understood that, as a non-limiting example, for the process of obtaining the first sample reflectance value, the second sample reflectance value, their ratio, the third sample reflectance value, the fourth sample reflectance value, and their ratio, refer to the related description of determining the threshold k.
  • the band ratio Rw1/Rw2 is added to the reflectance features of the two characteristic bands to form a three-dimensional feature combination vector, namely [Rw1, Rw2, Rw1/Rw2], which increases the dimension of the features.
  • the feature combination vector is input into the liveness detection model to output the detection result, which is determined jointly by the three features in [Rw1, Rw2, Rw1/Rw2]; more accurate results can thus be obtained.
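Since the text leaves the concrete model open (support vector machine, neural network, Bayesian classifier, random forest), the following pure-NumPy nearest-centroid classifier is only a stand-in showing how the [Rw1, Rw2, Rw1/Rw2] feature vectors and labels could be used; all data values are synthetic.

```python
import numpy as np

# Synthetic training data: [Rw1, Rw2, Rw1/Rw2] feature vectors.
# Label 1 = living body, 0 = prosthesis (labels are illustrative).
X = np.array([
    [0.10, 0.80, 0.125], [0.15, 0.85, 0.176],   # real-skin samples: low ratio
    [0.60, 0.70, 0.857], [0.75, 0.80, 0.938],   # prosthesis samples: high ratio
])
y = np.array([1, 1, 0, 0])

# "Training": one centroid per class in feature space.
centroids = {label: X[y == label].mean(axis=0) for label in (0, 1)}

def predict(features):
    """Return 1 (living body) or 0 (prosthesis) for one feature vector,
    by nearest centroid."""
    dists = {label: np.linalg.norm(features - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)
```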
  • An embodiment of the present application also provides a light source spectrum acquisition device.
  • for details not described for the light source spectrum acquisition device, please refer to the description of the methods in the foregoing embodiments.
  • FIG. 6 is a schematic block diagram of a light source spectrum acquisition device provided by an embodiment of the present application.
  • the light source spectrum acquisition device includes: an acquisition module 81 , a reconstruction module 82 , a conversion module 83 and a first calculation module 84 .
  • the acquiring module 81 is configured to acquire a multispectral image, and determine the multispectral response value of each pixel in the multispectral image;
  • the reconstruction module 82 is configured to reconstruct an RGB image from the multispectral image;
  • a conversion module 83 configured to convert the RGB image into a grayscale image
  • the first calculation module 84 is configured to determine the target area in the grayscale image whose gray value is less than the threshold, or less than or equal to the threshold, and calculate the light source spectral response value according to the multispectral response values of the pixels that the target area corresponds to in the multispectral image.
  • the gray value corresponding to each pixel in the grayscale image is calculated according to the three-channel values of that pixel in the RGB image, for example according to the formula deta=abs(1-G/B)+abs(1-R/B),
  • where R, G and B represent the three-channel values of each pixel in the RGB image, namely the R value, G value and B value,
  • and abs represents the absolute value function.
  • the apparatus for acquiring the light source spectrum further includes: a threshold determination module 85 .
  • the threshold determination module 85 is configured to perform histogram statistics on the grayscale image, and determine the threshold according to the interval parameter of the minimum value interval in the histogram statistics results.
  • the threshold determination module 85 is specifically configured to:
  • the threshold is determined according to the interval boundary value and pixel ratio of the minimum value interval in the histogram statistical results.
  • the first calculation module 84 is specifically used for:
  • An embodiment of the present application also provides a multi-spectral reflectance image acquisition device.
  • for details not described for the multispectral reflectance image acquisition device, please refer to the description of the method in the first embodiment above.
  • FIG. 8 is a schematic block diagram of a multi-spectral reflectance image acquisition device provided in an embodiment of the present application.
  • the multispectral reflectance image acquisition device includes: an acquisition module 101 , a reconstruction module 102 , a conversion module 103 , a first calculation module 104 and a second calculation module 105 .
  • An acquisition module 101 configured to acquire a multispectral image, and determine the multispectral response value of each pixel in the multispectral image;
  • a reconstruction module 102 configured to reconstruct an RGB image according to the multispectral image
  • a conversion module 103 configured to convert the RGB image into a grayscale image
  • the first calculation module 104 is configured to determine the target area in the grayscale image whose gray value is less than the threshold, or less than or equal to the threshold, and calculate the light source spectral response value according to the multispectral response values of the pixels that the target area corresponds to in the multispectral image;
  • the second calculation module 105 is configured to acquire a multispectral reflectance image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value.
  • the gray value corresponding to each pixel in the grayscale image is calculated according to the three-channel values of that pixel in the RGB image.
  • the multispectral reflectance image acquisition device further includes: a threshold determination module 106 .
  • the threshold determination module 106 is configured to perform histogram statistics on the grayscale image, and determine the threshold according to the interval parameter of the minimum value interval in the histogram statistics results.
  • the threshold determination module 106 is specifically configured to:
  • the threshold is determined according to the interval boundary value and pixel ratio of the minimum value interval in the histogram statistical results.
  • the first calculation module 104 is specifically used for:
  • the second calculation module 105 is specifically used for:
  • the electronic device may include one or more processors 120 (only one is shown in FIG. 10), a memory 121, and a computer program 122 stored in the memory 121 and runnable on the one or more processors 120, for example a program for acquiring light source spectra and/or multispectral reflectance images.
  • when the processors 120 execute the computer program 122, the steps in the embodiments of the light source spectrum acquisition method and/or the multispectral reflectance image acquisition method may be implemented.
  • alternatively, when the processors 120 execute the computer program 122, they can realize the functions of each module/unit in the embodiments of the light source spectrum acquisition device and/or the multispectral reflectance image acquisition device, which is not limited here.
  • FIG. 10 is only an example of an electronic device, and does not constitute a limitation on the electronic device.
  • the electronic device may include more or fewer components than shown in the figure, some components may be combined, or different components may be used.
  • the electronic device may also include an input and output device, a network access device, a bus, and the like.
  • the so-called processor 120 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the memory 121 may be an internal storage unit of the electronic device, such as a hard disk or memory of the electronic device.
  • the memory 121 can also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, etc. equipped on the electronic device.
  • the memory 121 may also include both an internal storage unit of the electronic device and an external storage device.
  • the memory 121 is used to store computer programs and other programs and data required by the electronic device.
  • the memory 121 can also be used to temporarily store data that has been output or will be output.
  • an embodiment of the present application also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, it can realize the steps in the embodiments of the light source spectrum acquisition method and/or the multispectral reflectance image acquisition method.
  • An embodiment of the present application provides a computer program product.
  • when the computer program product runs on an electronic device, the electronic device can realize the steps in the embodiments of the light source spectrum acquisition method and/or the multispectral reflectance image acquisition method.
  • the disclosed device/electronic equipment and method can be implemented in other ways.
  • the device/electronic-device embodiments described above are only illustrative. For example, the division into modules or units is only a logical function division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or in the form of software functional units.
  • an integrated module/unit is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments of the present invention can also be completed by instructing relevant hardware through a computer program, and the computer program can be stored in a computer-readable storage medium.
  • the computer program includes computer program code
  • the computer program code may be in the form of source code, object code, executable file or some intermediate form.
  • the computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in computer-readable media may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.


Abstract

A light source spectrum and multispectral reflectance image acquisition method, apparatus and electronic device. The light source spectrum acquisition method includes: acquiring a multispectral image and determining the multispectral response value of each pixel in the multispectral image (S110); reconstructing an RGB image from the multispectral image (S120); converting the RGB image into a grayscale image (S130); and determining a target area in the grayscale image whose gray value is less than a threshold, or whose gray value is less than or equal to the threshold, and calculating a light source spectral response value according to the multispectral response values of the pixels that the target area corresponds to in the multispectral image (S140), which can improve the estimation accuracy of the ambient light spectrum.

Description

Light source spectrum and multispectral reflectance image acquisition method, apparatus and electronic device
This application claims priority to Chinese patent application No. 202110578372.2, filed with the Chinese Patent Office on May 26, 2021 and entitled "Light source spectrum and multispectral reflectance image acquisition method, apparatus and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of multispectral detection, and in particular to a light source spectrum acquisition method, apparatus and electronic device, and to a multispectral reflectance image acquisition method, apparatus and electronic device.
Background
Multispectral imaging and multispectral analysis techniques can acquire information in both the spatial image dimension and the spectral dimension. The principle of multispectral imaging is to split the incident light into several narrow-band components and image them separately on a multispectral detector, thereby obtaining images of different spectral bands and forming three-dimensional multispectral data. Multispectral data processing methods extract texture from the image dimension, for example local binary pattern (LBP) texture extraction, gray-level co-occurrence matrix texture extraction and histogram of oriented gradient (HOG) texture extraction, and extract features related to material composition and color from the spectral dimension. Finally, the image-dimension information and the spectral-dimension information are fused to analyze the target object.
If the spectrum of the ambient light under which a multispectral image is captured is known, dividing the response-value spectrum of each pixel of the multispectral image by the ambient light spectrum yields a multispectral reflectance image; this image is related to the properties of the photographed object and does not change with the light source. The ambient light spectrum, i.e. the light source spectrum, refers to the spectrum of the light source incident on the surface of the multispectral subject. It can be seen that acquiring a high-accuracy ambient light spectrum is crucial to acquiring a high-accuracy multispectral reflectance image. Therefore, how to acquire a high-accuracy ambient light spectrum is an urgent problem to be solved.
Summary
Embodiments of the present application provide a light source spectrum acquisition method, apparatus and electronic device, and a multispectral reflectance image acquisition method, apparatus and electronic device, which can obtain a high-accuracy spectrum of the ambient light.
In a first aspect, an embodiment of the present application provides a light source spectrum acquisition method, including:
acquiring a multispectral image, and determining the multispectral response value of each pixel in the multispectral image;
reconstructing an RGB image from the multispectral image;
converting the RGB image into a grayscale image;
determining a target area in the grayscale image whose gray value is less than a threshold or whose gray value is less than or equal to the threshold, and calculating a light source spectral response value according to the multispectral response values of the pixels that the target area corresponds to in the multispectral image.
In this embodiment, the target area whose gray value is less than the threshold, or whose gray value is less than or equal to the threshold, is found, and the light source spectral response value is calculated based on the target area, which can improve the accuracy of the obtained light source spectrum.
As an implementation of the first aspect, the gray value corresponding to each pixel in the grayscale image is calculated from the three-channel values of that pixel in the RGB image.
As an example of this implementation, the gray value corresponding to each pixel is calculated according to the formula deta=abs(1-G/B)+abs(1-R/B), where R, G and B represent the three-channel values of each pixel in the RGB image, namely the R value, G value and B value, and abs represents the absolute value function.
As an implementation of the first aspect, after converting the RGB image into a grayscale image, the method further includes:
determining the threshold according to the grayscale image.
As an implementation, determining the threshold according to the grayscale image includes: performing histogram statistics on the grayscale image, and determining the threshold according to the interval parameters of the minimum value interval in the histogram statistical results.
As an implementation, determining the threshold according to the interval parameters of the minimum value interval in the histogram statistical results includes:
determining the threshold according to the interval boundary value and the pixel proportion of the minimum value interval in the histogram statistical results.
As an implementation of the first aspect, calculating the light source spectral response value according to the multispectral response values of the pixels that the target area corresponds to in the multispectral image includes:
calculating the average of the multispectral response values of the pixels that the target area corresponds to in the multispectral image to obtain the light source spectral response value.
In a second aspect, an embodiment of the present application provides a multispectral reflectance image acquisition method, including:
acquiring a multispectral image, and determining the multispectral response value of each pixel in the multispectral image;
reconstructing an RGB image from the multispectral image;
converting the RGB image into a grayscale image;
determining a target area in the grayscale image whose gray value is less than a threshold or whose gray value is less than or equal to the threshold, and calculating a light source spectral response value according to the multispectral response values of the pixels that the target area corresponds to in the multispectral image;
acquiring a multispectral reflectance image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value.
In this embodiment, the target area whose gray value is less than the threshold, or whose gray value is less than or equal to the threshold, is found, and the light source spectral response value is calculated based on the target area, which can improve the accuracy of the obtained light source spectrum and thus the accuracy of the multispectral reflectance image.
As an implementation of the second aspect, the gray value corresponding to each pixel in the grayscale image is calculated from the three-channel values of that pixel in the RGB image.
As an example of this implementation, the gray value corresponding to each pixel is calculated according to the formula deta=abs(1-G/B)+abs(1-R/B), where R, G and B represent the three-channel values of each pixel in the RGB image, namely the R value, G value and B value, and abs represents the absolute value function.
As an implementation of the second aspect, after converting the RGB image into a grayscale image, the method further includes:
determining the threshold according to the grayscale image.
As an implementation, determining the threshold according to the grayscale image includes: performing histogram statistics on the grayscale image, and determining the threshold according to the interval parameters of the minimum value interval in the histogram statistical results.
As an implementation, determining the threshold according to the interval parameters of the minimum value interval in the histogram statistical results includes:
determining the threshold according to the interval boundary value and the pixel proportion of the minimum value interval in the histogram statistical results.
As an implementation of the second aspect, calculating the light source spectral response value according to the multispectral response values of the pixels that the target area corresponds to in the multispectral image includes:
calculating the average of the multispectral response values of the pixels that the target area corresponds to in the multispectral image to obtain the light source spectral response value.
As an implementation of the second aspect, acquiring a multispectral reflectance image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value includes:
dividing the multispectral response value of each pixel in the multispectral image by the light source spectral response value to obtain the multispectral reflectance image.
第三方面,本申请一实施例提供了一种光源光谱获取装置,包括:
获取模块,用于获取多光谱图像,确定所述多光谱图像中每个像素的多光谱响应值;
重构模块,用于根据所述多光谱图像重构RGB图像;
转换模块,用于将所述RGB图像转换成灰度图像;
第一计算模块,用于确定所述灰度图像中灰度值小于阈值或灰度值小于或等于阈值的目标区域,根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值。
第四方面,本申请一实施例提供了一种多光谱反射率图像获取装置,包括:
获取模块,用于获取多光谱图像,确定所述多光谱图像中每个像素的多光谱响应值;
重构模块,用于根据所述多光谱图像重构RGB图像;
转换模块,用于将所述RGB图像转换成灰度图像;
第一计算模块,用于确定所述灰度图像中灰度值小于阈值或灰度值小于或等于阈值的目标区域,根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值;
第二计算模块,用于根据所述多光谱图像中每个像素的多光谱响应值和所述光源光谱响应值,获取多光谱反射率图像。
第五方面,本申请实施例提供了一种电子设备,包括:存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如第一方面或第一方面任一实现方式所述的光源光谱获取方法,和/或,实现如第二方面或第二方面任一实现方式所述的多光谱反射率图像获取方法。
第六方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现如第一方面或第一方面任一实现方式所述的光源光谱获取方法,和/或,实现如第二方面或第二方面任一实现方式所述的多光谱反射率图像获取方法。
第七方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在电子设备上运行时,使得电子设备执行如第一方面或第一方面任一实现方式所述的光源光谱获取方法,和/或,实现如第二方面或第二方面任一实现方式所述的多光谱反射率图像获取方法。
可以理解的是,上述第三方面至第七方面的有益效果可以参见上述第一方面或第二方面中的相关描述,在此不再赘述。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一实施例提供的一种多光谱反射率图像获取方法的实现流程示意图;
图2是本申请一实施例提供的一种对灰度图像进行直方图统计的统计结果示意图;
图3是本申请一实施例提供的另一种多光谱反射率图像获取方法的实现流程示意图;
图4是本申请一实施例提供的一种活体检测方法的实现流程示意图;
图5是本申请一实施例提供的另一种活体检测方法的实现流程示意图;
图6是本申请一实施例提供的一种光源光谱获取装置的示意图;
图7是本申请一实施例提供的另一种光源光谱获取装置的示意图;
图8是本申请一实施例提供的一种多光谱反射率图像获取装置的示意图;
图9是本申请一实施例提供的另一种多光谱反射率图像获取装置的示意图;
图10是本申请一实施例提供的一种电子设备的示意图。
具体实施方式
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本发明实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本发明。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本发明的描述。
在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
在本申请说明书中描述的“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
此外,在本申请的描述中,“多个”的含义是两个或两个以上。术语“第一”、“第二”、“第三”和“第四”等仅用于区分描述,而不能理解为指示或暗示相对重要性。
为了说明本发明所述的技术方案,下面通过具体实施例来进行说明。
光源估计方法一般有如下两种:第一种,基于白世界的光源光谱估计方法。该方法找到多光谱图像最亮的区域,求取其平均光谱作为光源光谱。该方法在最亮区域是白色区域时,还原效果较好。第二种,基于灰世界的光源光谱估计方法。该方法求取整个多光谱图像的平均光谱作为光源光谱。该方法在颜色丰富的场景下,还原效果较好。
这两种方法都是基于对整个多光谱图像的模糊预估,来估计光源光谱。例如,基于白世界的光源光谱预估,取多光谱图像中最亮的区域作为光源光谱,如果最亮的区域不是白色,则预估的误差较大。又如,基于灰世界的光源光谱预估,取多光谱图像中所有像素的平均值作为光源光谱,如果图像中白色区域很少且有大面积单一色时,则预估的误差较大。
以上两种方法在使用不同光源的应用场景下,适应性不强,误差较大。为了解决如何更准确预估环境光或光源光谱(或称为环境光或光源近似光谱)的技术问题,本申请实施例提供了一种获取多光谱反射率图像的方法,可以通过获取多光谱图像,在该多光谱图像中定位光源区域,从而根据该光源区域的多光谱信息确定光源光谱。
实施例一
图1是本申请一实施例提供的一种多光谱反射率图像获取方法的实现流程示意图,本实施例中的多光谱反射率图像获取方法可由电子设备执行。电子设备包括但不限于计算机、平板电脑、服务器、手机或多光谱相机等。服务器包括但不限于独立服务器或云服务器等。本实施例中的多光谱反射率图像获取方法适用于需要预估当前环境中光源光谱(或光源近似光谱)的情形。如图1所示,多光谱反射率图像获取方法可以包括步骤S110至步骤S150。
S110,获取多光谱图像,确定所述多光谱图像中每个像素的多光谱响应值。
其中,多光谱图像为单张多光谱图像。通过多光谱相机采集任一场景(场景中存在环境光或光源)的多光谱图像。单张多光谱图像包含的信息包括各个像素的响应值信息,响应值信息代表反射到多光谱相机的光在多光谱相机上的响应。该响应值信息是随着光源的强度、光源光谱的形状、光源的打光方向而变化的。
多光谱相机的通道数量可以为几个至十几个,例如八通道、九通道或十六通道等。本实施例对多光谱相机的通道数量以及各通道的波段不作具体限定。为了更好的理解本实施例,后续以九通道的多光谱相机作为多光谱相机的示例,应理解,示例性描述不能解释为对本实施例的具体限制。
作为一非限制性示例,多光谱相机为九通道的多光谱相机,九通道的多光谱相机的每个像素能获得x1,x2,x3,x4,x5,x6,x7,x8,x9这九个响应值。也就是说,每个像素的多光谱响应值为对应九个通道的九个响应值。其中,x1代表具有q1响应曲线特性的第一通道的响应值;x2代表具有q2响应曲线特性的第二通道的响应值;x3代表具有q3响应曲线特性的第三通道的响应值;......;x9代表具有q9响应曲线特性的第九通道的响应值。也就是说,xi代表具有qi响应曲线特性的第i通道的响应值,i取值为1至9的整数。
S120,根据所述多光谱图像重构红绿蓝(red green blue,RGB)图像。
RGB图像中每个像素有三个通道的响应值,即R通道的R值、G通道的G值和B通道的B值。根据多光谱图像重构RGB图像,就是根据多光谱图像中每个像素的多光谱响应值计算该像素的R值、G值和B值。
作为一实现方式,步骤S120,根据所述多光谱图像重构RGB图像,包括如下步骤S121至S124。
S121,获取多光谱相机的九个通道的量子效率(quantum efficiency,QE)响应曲线。
具体地,获取多光谱相机的九个通道的QE响应曲线矩阵,QE响应曲线矩阵可以记为q1,q2,q3,q4,q5,q6,q7,q8,q9。其中,矩阵q1为第一通道的响应曲线,矩阵q2为第二通道的响应曲线,......,矩阵q9为第九通道的响应曲线。也就是说,矩阵qj为第j通道的响应曲线,j取值为1至9的整数。
需要说明的是,针对一款固定的多光谱相机(或多光谱硬件),这些响应曲线可以通过测试得到。测试得到这些曲线后,可以预先存储在电子设备的存储器中,在需要时调用即可。
S122,获取三刺激值曲线,即r曲线,g曲线和b曲线。
获取真实三原色表色系统(CIE 1931 RGB系统)的光谱三刺激值曲线,包括r曲线,g曲线和b曲线。需要说明的是,这些曲线为已知,可以从CIE标准查到。三个曲线预先存储在电子设备的存储器中,在需要时调用即可。
S123,利用九通道的QE响应曲线对三刺激值曲线进行线性拟合,得到拟合参数。
具体地,用线性拟合的方法,分别将r曲线,g曲线和b曲线用九通道的响应曲线,即q1,q2,q3,q4,q5,q6,q7,q8,q9曲线进行线性拟合。线性拟合的公式如下:
r=a1*q1+a2*q2+a3*q3+a4*q4+a5*q5+a6*q6+a7*q7+a8*q8+a9*q9;
g=b1*q1+b2*q2+b3*q3+b4*q4+b5*q5+b6*q6+b7*q7+b8*q8+b9*q9;
b=c1*q1+c2*q2+c3*q3+c4*q4+c5*q5+c6*q6+c7*q7+c8*q8+c9*q9。
用偏最小二乘法求解以上方程,求得拟合参数的值,即以下各参数的值:
a1,a2,a3,a4,a5,a6,a7,a8,a9;
b1,b2,b3,b4,b5,b6,b7,b8,b9;
c1,c2,c3,c4,c5,c6,c7,c8,c9。
S124,根据所述拟合参数和每个像素的所述多光谱响应值进行拟合计算,得到每个像素的R值、G值和B值。
具体地,根据步骤S110确定多光谱图像中某一像素的九通道响应值为:x1,x2,x3,x4,x5,x6,x7,x8,x9,根据步骤S123计算得到拟合参数,在步骤S124中根据拟合参数和该像素的九通道响应值进行拟合计算,得到该像素的R值、G值和B值。公式如下:
R=a1*x1+a2*x2+a3*x3+a4*x4+a5*x5+a6*x6+a7*x7+a8*x8+a9*x9;
G=b1*x1+b2*x2+b3*x3+b4*x4+b5*x5+b6*x6+b7*x7+b8*x8+b9*x9;
B=c1*x1+c2*x2+c3*x3+c4*x4+c5*x5+c6*x6+c7*x7+c8*x8+c9*x9。
经过拟合计算得到多光谱图像中每个像素的R值、G值和B值,就得出了整幅多光谱图像对应的RGB图像,即根据多光谱图像重构了RGB图像。
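上述步骤S121至S124的拟合与重构过程可以用下面的Python代码作原理性示意(该代码为编者补充的示例,并非本申请原文:其中的QE响应曲线、三刺激值曲线和多光谱图像均为随机生成的假设数据,且以普通最小二乘代替文中的偏最小二乘,仅供理解参考):

```python
import numpy as np

# 假设数据:九通道QE响应曲线矩阵Q,形状为(波长采样数, 9),列依次对应q1至q9;
# 三刺激值曲线rgb_curves,形状为(波长采样数, 3),列依次对应r、g、b曲线
rng = np.random.default_rng(0)
Q = rng.random((31, 9))
rgb_curves = rng.random((31, 3))

# 最小二乘拟合:求系数矩阵A(形状(9, 3)),使Q @ A ≈ rgb_curves,
# A的三列分别对应拟合参数a1至a9、b1至b9、c1至c9
A, *_ = np.linalg.lstsq(Q, rgb_curves, rcond=None)

# 对多光谱图像逐像素拟合计算:ms_image形状为(高, 宽, 9),
# 每个像素为九通道响应值x1至x9,重构得到RGB图像
ms_image = rng.random((4, 4, 9))
rgb_image = ms_image @ A  # 形状(高, 宽, 3),三通道依次为R值、G值、B值
```

其中拟合与逐像素重构共用同一个系数矩阵A,对应文中先拟合曲线、再代入各像素响应值的两步计算。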
在其他一些实施例中,在重构RGB图像之后,还可以对RGB图像进行白平衡,得到经白平衡后的RGB图像,可以记为RGB_wb图像。在这些实施例中,在后续的步骤S130中,是将RGB_wb图像转换成灰度图像。
在一些实现方式中,可以直接借用现有的白平衡方法,例如灰世界、白世界或自动阈值等方法,对RGB图像做白平衡处理,得到经白平衡的RGB图像RGB_wb。借用白平衡这个步骤,可以使得在后续步骤S140中得到的deta值接近0的区域,与灰色或白色区域更好地对应上,可以更准确得到区域选择结果,从而得到更准确的光源光谱。
S130,将所述RGB图像转换成灰度图像。
其中,灰度图像可称为deta图像。灰度图像中各像素对应的灰度值根据该像素在所述RGB图像中的多通道数值计算得到。
根据RGB图像中每个像素的R、G和B三个通道的R值、G值和B值,计算该像素的灰度值(或deta值),根据各个像素的灰度值(或deta值)就得到与RGB图像对应的灰度图像(或deta图像)。也就是说,灰度图像(或deta图像)各像素对应的灰度值(或deta值)根据该像素在所述RGB图像中的多通道数值,即R值、G值和B值计算得到。
作为一非限制性示例,提取RGB图像的R、G和B三个通道,针对每一个像素,根据公式deta=abs(1-G/B)+abs(1-R/B)求取该像素对应的deta值,并将该deta值作为灰度值赋值给灰度图像的该像素,根据各个像素的灰度值得到deta图像,其中,公式中的abs代表绝对值函数。
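该deta图像的计算可以用下面的Python代码示意(编者补充的示例,并非本申请原文,其中的RGB图像为假设数据):

```python
import numpy as np

# 假设数据:rgb_image为重构得到的RGB图像,形状为(高, 宽, 3)
rgb_image = np.array([[[100., 100., 100.],   # R=G=B,deta为0(白色或灰色)
                       [200., 120., 80.]]])  # 彩色像素,deta较大

# deta = abs(1 - G/B) + abs(1 - R/B),逐像素计算得到灰度图像(deta图像)
R, G, B = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
deta = np.abs(1 - G / B) + np.abs(1 - R / B)
```

可见R值、G值、B值越接近的像素,其deta值越接近0,这正是后续步骤S140据以选取目标区域的依据。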
S140,确定所述灰度图像中灰度值小于阈值的目标区域,根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值。
具体地,确定所述灰度图像(或deta图像)中灰度值(或deta值)小于阈值的目标区域。其中,阈值t可以取接近0的某个数值。根据多光谱图像中对应该目标区域的各像素的多光谱响应值计算光源的光谱响应值。
在本实施例中,找到deta图像中deta值接近0的区域,是为了找到R值、G值、B值这三个值均接近的区域。当R值、G值、B值这三个值接近时,可能为白色区域,也可能为不同灰度的灰色区域。由于白色区域和/或灰色区域的反射率是一条直线,而入射的光源光谱曲线和反射的光源光谱曲线一致,仅仅是亮度上的差异。因而,白色区域和/或灰色区域的光谱能更加准确地反映光源光谱。
作为本实施例一实现方式,对deta图像进行直方图统计,即用直方图统计deta图像中deta数据的分布,根据deta图像的直方图统计结果确定阈值t。具体地,对deta图像进行直方图统计后,根据直方图统计结果中最小数值区间的区间参数确定阈值t。区间参数包括但不限于像素个数,像素占比,区间边界数值等中的一个或多个。
作为一非限制性示例,对灰度图像(或deta图像)进行直方图统计的统计过程如下:首先找到灰度值(或deta值)的最小值M0和最大值M10;然后将最小值M0和最大值M10之间划为10个范围(或称为数值区间),10个数值区间从小到大依次为:[M0,M1),[M1,M2),[M2,M3),[M3,M4),[M4,M5),[M5,M6),[M6,M7),[M7,M8),[M8,M9),[M9,M10],其中,M0,M1,M2,M3,M4,M5,M6,M7,M8,M9,M10可称为区间数值M。统计灰度值大于等于M0且小于M1的像素个数,即第一数值区间或最小数值区间的像素个数,该像素个数占总像素个数的比例为h1,即第一数值区间的像素占比h为h1。以同样方法得到第二数值区间至第十数值区间的像素占比h依次为:h2,h3,h4,h5,h6,h7,h8,h9,h10。对deta图像进行直方图统计的统计结果的示意图如图2所示。针对第一数值区间或最小数值区间,t=M0+(M1-M0)*h1。每个数值区间对应的t值不一样,与该数值区间的区间数值M以及h值相关。在本实施例中,只需要找到第一数值区间的t值,即确定使deta接近0的t值。
作为另一非限制性示例,首先找到灰度值(或deta值)的最小值M0和最大值M10;然后将最小值M0和最大值M10之间划为10个范围(或称为数值区间)。确定第一个数值区间,即最小数值区间的区间参数,具体地,统计deta值大于等于M0且小于M1的像素个数,即最小数值区间的像素个数,该像素个数占总像素个数的比例为h1,即第一数值区间的像素占比为h1。最后,根据M0,M1和h1确定预设数值t。例如,t=M0+(M1-M0)*h1。这样就确定了使deta接近0的t值。
在阈值t确定以后,统计deta<t的目标区域,即找到灰度图像中deta值接近0的目标区域,计算该目标区域在多光谱图像中对应的各像素的九通道中每个通道的平均值。该目标区域的平均多光谱数据,即为近似的光源光谱。例如,灰度图像中deta<t的目标区域包括N个像素,N为正整数,获取目标区域在多光谱图像中对应的N个像素各自的九通道的多光谱响应值,针对九通道中每个通道,计算N个像素的多光谱响应值的平均值,平均值作为光源光谱响应值。N个像素中每个像素对应九通道的多光谱响应值,因而平均值是对应九个通道的九个数值。
在本实施例其他实现方式中,阈值t确定以后,统计deta<=t的目标区域。
需要说明的是,本实施例直方图统计时数值区间的划分数量可以取经验值,例如可以根据现有拍摄数据的经验获得。区间划分得越多,得到的目标区域的deta值越接近0,得到的光源光谱在理论上会越准确,但是当区间划分得足够多导致deta值接近0的目标区域只包括几个像素时,可能会由于噪声过大,反而使得到的光源光谱噪声过大,因此区间的划分数量需要折中考虑,既不能太大,也不能太小。本申请对此不予具体限制。
还需要说明的是,直方图统计时划分好的多个数值区间,可以包括左开右闭区间,左闭右开区间,左开右开区间,左闭右闭区间等中的一种或多种的组合。本申请对此不予具体限制。
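步骤S140中阈值t的确定与光源光谱的求取,可以用下面的Python代码示意(编者补充的示例,并非本申请原文:deta图像和多光谱图像均为随机生成的假设数据,直方图划分沿用文中的10个数值区间):

```python
import numpy as np

# 假设数据:deta图像及其对应的九通道多光谱图像
rng = np.random.default_rng(1)
deta = rng.random((64, 64))
ms_image = rng.random((64, 64, 9))

# 将[最小值M0, 最大值M10]划为10个数值区间并统计直方图
counts, edges = np.histogram(deta, bins=10)
M0, M1 = edges[0], edges[1]        # 最小数值区间的区间边界数值
h1 = counts[0] / deta.size         # 最小数值区间的像素占比
t = M0 + (M1 - M0) * h1            # 阈值t = M0 + (M1-M0)*h1

# deta < t的目标区域,对九通道逐通道求平均,得到近似的光源光谱响应值
mask = deta < t
light_spectrum = ms_image[mask].mean(axis=0)  # 形状(9,)
```

若采用deta<=t的实现方式,只需将mask改为`deta <= t`,其余计算不变。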
S150,根据所述多光谱图像中每个像素的多光谱响应值和所述光源光谱响应值,获取多光谱反射率图像。
作为本实施例一实现方式,根据步骤S110确定了多光谱图像中每个像素的多光谱响应值,根据步骤S140确定了光源光谱响应值,因此,在步骤S150中,将多光谱图像中每个像素的多光谱响应值除以光源光谱响应值获得多光谱反射率图像。
作为一非限制性示例,多光谱图像中某一像素的九通道多光谱响应值为x1,x2,x3,x4,x5,x6,x7,x8,x9。光源光谱响应值,即九个通道的多光谱响应值的平均值为y1,y2,y3,y4,y5,y6,y7,y8,y9。计算x1/y1,x2/y2,x3/y3,x4/y4,x5/y5,x6/y6,x7/y7,x8/y8,x9/y9得到该像素的反射率,求出每个像素的反射率后,就得到了多光谱图像对应的多光谱反射率图像。
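上述逐像素、逐通道相除的过程,可以用下面的Python代码示意(编者补充的示例,并非本申请原文,数据均为随机生成的假设数据):

```python
import numpy as np

# 假设数据:九通道多光谱图像与光源光谱响应值y1至y9
rng = np.random.default_rng(2)
ms_image = rng.random((4, 4, 9)) + 0.1   # 各像素九通道响应值x1至x9
light_spectrum = rng.random(9) + 0.5     # 光源光谱响应值(避免除零取为正值)

# 每个像素的多光谱响应值逐通道除以光源光谱响应值,得到多光谱反射率图像
reflectance = ms_image / light_spectrum  # 广播除法,形状(4, 4, 9)
```

利用NumPy的广播机制,一次除法即完成所有像素的x_i/y_i计算。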
本实施例借用多光谱图像可以还原RGB图像的优势,从还原的RGB图像中寻找到白色或灰色区域,由于多光谱图像中白色或灰色区域的光谱为最接近光源的光谱,本方案中增加了区域选择这一步骤,将该区域的平均光谱作为光源的近似光谱,预估的光源光谱精度较高,可适用于采用不同光源的场景,基于该光源光谱计算的多光谱反射率图像也更精准。
实施例二
图3是本申请另一实施例提供的一种多光谱反射率图像获取方法的实现流程示意图,本实施例中的多光谱反射率图像获取方法可由电子设备执行。如图3所示,多光谱反射率图像获取方法可以包括步骤S210至步骤S250。应理解,实施例二与实施例一的相同之处,请详见前述方法的描述,此处不再赘述。
S210,获取多光谱图像,确定所述多光谱图像中每个像素的多光谱响应值。
S220,获取RGB图像,将RGB图像与多光谱图像进行匹配,获得匹配后的RGB图像。
在实施例一中RGB图像是根据多光谱图像重构获得的,因此RGB图像和多光谱图像具有相同视角。而在实施例二中,通过另一个相机,即彩色相机获取同一场景的RGB图像,因此RGB图像与通过多光谱相机获取的多光谱图像视角不同,需要进行匹配操作。
作为本实施例一实现方式,将RGB图像中的像素点和多光谱图像的像素点一一对应,例如将RGB图像中某物体和多光谱图像中该物体的像素点对应。当通过RGB图像找到灰色或白色区域时,再通过对应关系找到多光谱图像中的灰色或白色区域,计算该区域的多通道响应的平均值,作为近似的光源光谱响应值。
在本实施例中,彩色相机和多光谱相机相邻设置,两者位置越近,两者的接收端或成像端拍摄的视场越接近,这样在匹配过程中RGB图像和多光谱图像具备更多的对应像素点,从而可以增加光源光谱预估结果的精度。
S230,将匹配后的RGB图像转换成灰度图像。
其中,灰度图像中各像素对应的灰度值根据该像素在匹配后的RGB图像中的多通道数值计算得到。
S240,确定所述灰度图像中灰度值小于阈值的目标区域,根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值。
S250,根据所述多光谱图像中每个像素的多光谱响应值和所述光源光谱响应值,获取多光谱反射率图像。
实施例二与实施例一的不同之处在于步骤S120与S220不同,其他步骤相同或相似。在实施例一中RGB图像是根据多光谱图像重构获得的,因而RGB图像和多光谱图像是由同一相机获得的,两者具有相同视角,因而实施例一预估的光源光谱精度较实施例二更高。
基于实施例一和实施例二的方法获得的多光谱反射率图像,进行活体检测或应用于其他模型,在不同光源下应用时均可以有更好的鲁棒性。例如,基于多光谱反射率图像做活体检测,其分析结果不随光源的变化而变化,具有较好的鲁棒性。接下来介绍一种活体检测方法。
由于真实人体皮肤和假体(例如假手指或假面具等)在几个特征波段的光谱特性有较大的区别,用本申请提供的波段比方法能应用皮肤本身的光谱特性,来排除掉大部分的假体,足够满足普通产品的精度需要。例如真实人体皮肤的特性包括:在420至440nm(单位:纳米)波段,皮肤特有黑色素吸收;在550至590nm波段,皮肤特有血红蛋白吸收;在960至980nm波段,皮肤特有水分吸收;在800至850nm波段,皮肤吸收较弱(即反射较高)等。对于精度要求较高的支付类消费场景,该波段比方法可以作为多光谱活检的第一步判断,将大部分假体排除掉,当遇到精度特别高的假体,再用机器学习或深度学习等精度更高的模型来判断。该波段比方法计算过程简单,且受环境光和暗噪声等因素的影响较小。
实施例三
图4是本申请另一实施例提供的一种活体检测方法的实现流程示意图,本实施例中的活体检测方法可由电子设备执行。如图4所示,活体检测方法可以包括步骤S310至步骤S340。
S310,获取包含人体皮肤的多光谱图像,所述多光谱图像包含至少一个像素。
其中,人体皮肤包括但不限于人体中未被覆盖的某个部位或某个区域的皮肤,例如人脸皮肤、或人脸的某个区域的皮肤、或手指的皮肤等。
通过多光谱相机获取包含人体皮肤的多光谱图像。所述多光谱图像至少包含一个像素。需要说明的是,该至少一个像素是针对人体皮肤成像的像素。
S320,确定所述至少一个像素分别在第一特征波段和第二特征波段的第一多光谱响应值Dw1及第二多光谱响应值Dw2。
其中,根据多光谱图像,确定至少一个像素在第一特征波段的第一多光谱响应值Dw1,以及在第二特征波段的第二多光谱响应值Dw2。
根据实施例一的描述可知,多光谱图像包括每个像素的多通道的多光谱响应值。在实施例一中,并不限定通道的数量和波段,而在实施例三中,多通道至少包括第一特征波段和第二特征波段这两个通道,并不限定其他通道的数量和波段。也就是说,在实施例三中,多光谱相机的通道数量至少为2个,至少包括第一特征波段和第二特征波段这两个通道。多光谱图像包括每个像素的至少两个通道的多光谱响应值,即包括第一特征波段的第一多光谱响应值Dw1,以及第二特征波段的第二多光谱响应值Dw2。因而,可以根据多光谱图像确定人体皮肤对应的至少一个像素的第一特征波段的第一多光谱响应值Dw1及第二特征波段的第二多光谱响应值Dw2。
在实施例三中,可以根据真实人体皮肤的反射光谱特性,选择两个有代表性的波段,即第一特征波段w1和第二特征波段w2。
在一些实现方式中,第一特征波段w1选择为真实人体皮肤特有的吸收峰波段,在该波段假体和真实人体皮肤的反射率有较大差异。例如420至440nm波段或在该波段内的某个波段,该波段为真实人体皮肤特有黑色素吸收波段;又如550至590nm波段或在该波段内的某个波段,该波段为真实人体皮肤特有血红蛋白吸收波段;再如960至980nm波段或在该波段内的某个波段,该波段为真实人体皮肤特有水分吸收波段。
在一些实现方式中,第二特征波段w2选择为真实人体皮肤的非吸收峰波段,即真实人体皮肤吸收较弱(或反射较高)的波段,例如800至850nm波段或在该波段内的某个波段。
S330,根据所述多光谱图像分别获取第一特征波段和第二特征波段的第一光源光谱响应值Sw1以及第二光源光谱响应值Sw2。
其中,根据多光谱图像获取第一特征波段的第一光源光谱响应值Sw1,以及第二特征波段的第二光源光谱响应值Sw2。
在实施例三的一些实现方式中,可以用现有技术获取多光谱图像在第一特征波段的第一光源光谱响应值Sw1,以及在第二特征波段的第二光源光谱响应值Sw2。
在实施例三的其他一些实现方式中,可以利用实施例一和实施例二中描述的获取光源光谱响应值的方法,获取第一特征波段的第一光源光谱响应值Sw1,以及在第二特征波段的第二光源光谱响应值Sw2。此处未详细描述之处,请参见实施例一和实施例二的相关描述之处。
具体地,首先,获取多光谱图像对应的RGB图像,可以根据多光谱图像重构RGB图像(参见实施例一),也可以在拍摄多光谱图像时针对相同场景拍摄RGB图像(参见实施例二);然后,将RGB图像转换成灰度图像;再确定灰度图像中灰度值小于阈值的目标区域;最后,计算目标区域的多通道中第一特征波段这一通道的多光谱响应值的平均值,即第一光源光谱响应值Sw1;计算目标区域的多通道中第二特征波段这一通道的多光谱响应值的平均值,即第二光源光谱响应值Sw2。
需要说明的是,一方面,由于实施例一和实施例二的方法预估的光源光谱更准确,因而基于实施例一和实施例二的相关描述得到的第一光源光谱响应值Sw1和第二光源光谱响应值Sw2更准确,从而可以提高后续活体检测结果的精度。另一方面,由于实施例一和实施例二中预估光源光谱的方法可适用于不同光源的应用场景,因此,活体检测方案在不同光源下应用时可以有更好的鲁棒性。
S340,计算Dw1/Dw2与Sw2/Sw1的乘积,并将乘积与阈值k比较,若乘积小于阈值k,则判断所述人体为活体。
至少一个像素点在第一特征波段w1的多光谱响应值为Dw1,光源光谱的第一特征波段w1的预估响应值为第一光源光谱响应值Sw1;至少一个像素点在第二特征波段w2的多光谱响应值为Dw2,光源光谱的第二特征波段w2的预估响应值为第二光源光谱响应值Sw2。
计算(Dw1/Dw2)*(Sw2/Sw1)得到乘积Rw,并将乘积Rw与阈值k进行比较,根据比较结果获得活体检测结果。
作为一实现方式,首先,计算Dw1与Sw1的比值,即计算至少一个像素在第一特征波段的反射率值,可以记为Rw1,Rw1=Dw1/Sw1;计算Dw2与Sw2的比值,即计算至少一个像素在第二特征波段的反射率值,可以记为Rw2,Rw2=Dw2/Sw2。然后,计算Rw1与Rw2的比值,可以记为Rw,Rw=Rw1/Rw2=(Dw1/Dw2)*(Sw2/Sw1)。因此,本实现方式可以称作波段比的活体检测方法。
在一些实施例中,若乘积Rw小于阈值k,则判定人体为活体;若乘积Rw等于或大于阈值k,则判断人体为假体。在其他一些实施例中,根据活体检测的实际精度要求,调整比较条件,例如,当乘积Rw等于阈值k时,对应的活体检测结果可以设置为:判定人体为活体。本申请对此不予具体限制。
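上述波段比的判别过程可以用下面的Python代码示意(编者补充的最小示例,并非本申请原文,其中的响应值和阈值k均为假设数值):

```python
# 假设数值:像素在两个特征波段的多光谱响应值及光源光谱的预估响应值
Dw1, Dw2 = 120.0, 300.0   # 第一、第二特征波段的多光谱响应值
Sw1, Sw2 = 0.8, 1.0       # 第一、第二光源光谱响应值
k = 0.6                   # 预先确定的阈值k(假设)

Rw1 = Dw1 / Sw1           # 第一特征波段的反射率值
Rw2 = Dw2 / Sw2           # 第二特征波段的反射率值
Rw = Rw1 / Rw2            # 等价于(Dw1/Dw2)*(Sw2/Sw1)

is_live = Rw < k          # 乘积Rw小于阈值k时判定人体为活体
```

本示例取Rw<k判定为活体;如文中所述,当Rw等于k时的归类可根据实际精度要求调整。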
在图4所示实施例的基础上,在其他实施例中,步骤S340之前还包括确定阈值k的步骤。
作为一实现方式,确定阈值k的过程包括:获取多个真实皮肤样本在第一特征波段和第二特征波段的第一样本反射率R1和第二样本反射率R2,计算多个真实皮肤样本的第一样本反射率比值,即R1/R2,确定多个真实皮肤样本中第一样本反射率比值的最大值a。此外,获取多个不同种类的假体样本在第一特征波段和第二特征波段的第三样本反射率R3和第四样本反射率R4,计算多个假体样本的第二样本反射率比值R3/R4,确定多个假体样本中第二样本反射率比值的最小值b。最后,根据最大值a和最小值b确定阈值k。
作为一非限制性示例,通过光谱仪采集M个(M为大于1的整数)不同的真实皮肤样本各自在第一特征波段的第一样本反射率R1,以及在第二特征波段的第二样本反射率R2,计算M个真实皮肤样本中每个真实皮肤样本的第一样本反射率比值,即R1/R2,找到M个真实皮肤样本的第一样本反射率比值的最大值a。此外,采用与真实皮肤样本相同的处理方法,通过光谱仪采集N个(N为大于1的整数)不同种类的假体样本各自在第一特征波段的第三样本反射率R3,以及在第二特征波段的第四样本反射率R4,计算N个假体样本中每个假体样本的第二样本反射率比值,即R3/R4,找到N个假体样本的第二样本反射率比值的最小值b。然后根据a和b确定阈值k的取值范围。例如阈值k的取值范围为:(a+b)/2>=k>=min(a,b),其中min表示取最小值函数。即阈值k大于或等于a和b两个值中较小的值,阈值k小于或等于a和b两个值的均值。阈值k的具体取值可以根据实际应用的需求确定,本实施例通过简单的阈值k设计即可将较多的活体和假体进行区分。
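阈值k的确定过程可以用下面的Python代码示意(编者补充的示例,并非本申请原文,样本波段比均为假设数据):

```python
# 假设数据:真实皮肤样本的波段比R1/R2与假体样本的波段比R3/R4
real_ratios = [2.8, 3.1, 2.5]   # M个真实皮肤样本的第一样本反射率比值
fake_ratios = [4.0, 4.6, 5.2]   # N个假体样本的第二样本反射率比值

a = max(real_ratios)            # 真实皮肤样本波段比的最大值a
b = min(fake_ratios)            # 假体样本波段比的最小值b

# 阈值k的取值范围:min(a, b) <= k <= (a + b) / 2
k_low, k_high = min(a, b), (a + b) / 2
k = (k_low + k_high) / 2        # 示例:取范围中点(具体取值依应用需求)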
实施例四
图5是本申请另一实施例提供的一种活体检测方法的实现流程示意图,本实施例中的活体检测方法可由电子设备执行。如图5所示,活体检测方法可以包括步骤S410至步骤S440。应理解,实施例四与实施例三中相同之处,请详见前述实施例三的描述,此处不再赘述。
S410,获取包含人体皮肤的多光谱图像,所述多光谱图像包含至少一个像素。
S420,确定所述至少一个像素分别在第一特征波段和第二特征波段的第一多光谱响应值Dw1及第二多光谱响应值Dw2。
S430,根据所述多光谱图像分别获取第一特征波段和第二特征波段的第一光源光谱响应值Sw1以及第二光源光谱响应值Sw2。
S440,计算Dw1与Sw1的第一比值,计算Dw2与Sw2的第二比值,计算第一比值与第二比值的第三比值。
其中,计算Dw1与Sw1的第一比值,即计算至少一个像素在第一特征波段的反射率值,第一比值可以记为Rw1,Rw1=Dw1/Sw1;计算Dw2与Sw2的第二比值,即计算至少一个像素在第二特征波段的反射率值,第二比值可以记为Rw2,Rw2=Dw2/Sw2。然后,计算Rw1与Rw2的第三比值,第三比值可以记为Rw,Rw=Rw1/Rw2=(Dw1/Dw2)*(Sw2/Sw1)。
S450,将第一比值、第二比值和第三比值输入活体检测模型获得活体检测结果。
其中,活体检测模型为经训练的用于判断待测人体是否为活体的检测模型。将第一比值Rw1、第二比值Rw2和第三比值Rw输入活体检测模型,模型可以输出待测人体为活体或假体的分类结果。
在本实施例中,活体检测模型可以包括机器学习或深度学习模型。例如,支持向量机模型、神经网络模型、贝叶斯分类器或随机森林等模型。本申请对活体检测模型不予具体限制。
在一些实现方式中,活体检测模型可以包括二分类模型,二分类模型的二分类结果包括待测人体为活体和待测人体为假体。例如,将[Rw1,Rw2,Rw1/Rw2]输入活体检测模型,模型的输出为1,表示待测人体为活体;输出为0,表示待测人体为假体。
在其他一些实现方式中,活体检测模型可以包括多分类活体检测模型,在这些实现方式中,活体检测模型可以对活体和/或假体进行更细的分类。例如,对假体作进一步细分以区分假体的不同种类或类别(如不同种类或类别的假体对应不同材质的假体)。本申请对活体检测模型的分类数量不作具体限制。
需要说明的是,在利用活体检测模型之前,还需要获取经训练的活体检测模型。作为一非限制性示例,获取经训练的活体检测模型的过程包括:获取多个真实皮肤样本各自的第一样本向量及对应标签,第一样本向量包括真实皮肤样本在第一特征波段的第一样本反射率值、在第二特征波段的第二样本反射率值、第一样本反射率值与第二样本反射率值的比值这三个特征;获取多个不同种类的假体样本的第二样本向量及对应标签,第二样本向量包括假体样本在第一特征波段的第三样本反射率值、在第二特征波段的第四样本反射率值、第三样本反射率值与第四样本反射率值的比值这三个特征;利用第一样本向量及对应标签和第二样本向量及对应标签作为训练样本,对活体检测模型进行训练,得到经训练的活体检测模型。这样,经训练的活体检测模型可以实现活体和假体的分类,也就是说,经训练的活体检测模型可以用于识别待测人体是否为活体。应理解,作为一非限制性示例,获取第一样本反射率值、第二样本反射率值、第一样本反射率值与第二样本反射率值的比值、第三样本反射率值、第四样本反射率值、以及第三样本反射率值与第四样本反射率值的比值的过程,可以参见确定阈值k的相关描述。
在本实施例中,将波段比值Rw1/Rw2加入到两个特征波段的反射率特征中,组成了三维的特征组合向量,即[Rw1,Rw2,Rw1/Rw2],增加了特征的维度。将该特征组合向量输入活体检测模型输出活体检测结果,该活体检测结果由[Rw1,Rw2,Rw1/Rw2]中的三个特征共同决定,可以获得更准确的结果。
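基于三维特征组合向量[Rw1,Rw2,Rw1/Rw2]的分类,可以用下面的Python代码示意(编者补充的示例,并非本申请原文:此处以最近质心分类器代替文中的机器学习或深度学习模型,样本特征均为假设数据,仅示意特征向量的构造与二分类判别):

```python
import numpy as np

# 假设数据:活体与假体样本的三维特征向量[Rw1, Rw2, Rw1/Rw2]
live_samples = np.array([[1.5, 3.0, 0.5],
                         [1.6, 3.2, 0.5]])   # 活体样本,标签1
fake_samples = np.array([[3.0, 3.0, 1.0],
                         [3.3, 3.0, 1.1]])   # 假体样本,标签0

# 最简“训练”:分别求两类样本的质心
centroids = {1: live_samples.mean(axis=0), 0: fake_samples.mean(axis=0)}

def predict(feature):
    # 返回与输入特征向量欧氏距离最近的类别标签(1为活体,0为假体)
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

result = predict(np.array([1.4, 3.1, 0.45]))
```

实际应用中可将同样的特征向量输入支持向量机、神经网络等文中列举的模型,以获得更高精度的分类结果。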
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
本申请一实施例还提供一种光源光谱获取装置。该光源光谱获取装置中未详细描述之处请详见前述实施例中方法的描述。
参见图6,图6是本申请实施例提供的一种光源光谱获取装置的示意框图。所述光源光谱获取装置包括:获取模块81、重构模块82、转换模块83和第一计算模块84。
其中,获取模块81,用于获取多光谱图像,确定所述多光谱图像中每个像素的多光谱响应值;
重构模块82,用于根据所述多光谱图像重构RGB图像;
转换模块83,用于将所述RGB图像转换成灰度图像;
第一计算模块84,用于确定所述灰度图像中灰度值小于阈值或灰度值小于或等于阈值的目标区域,根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值。
可选的,作为一实现方式,所述灰度图像中各像素对应的灰度值根据该像素在所述RGB图像中的三通道数值计算得到。
作为该实现方式的一非限制性示例,根据公式deta=abs(1-G/B)+abs(1-R/B)计算各像素对应的灰度值,其中,R、G和B表示RGB图像中各像素的三通道数值,即R值、G值和B值,abs表示绝对值函数。
可选的,作为一实现方式,如图7所示,所述光源光谱获取装置,还包括:阈值确定模块85。
阈值确定模块85,用于对所述灰度图像进行直方图统计,根据直方图统计结果中最小数值区间的区间参数确定阈值。
作为该实现方式的一非限制性示例,所述阈值确定模块85,具体用于:
根据直方图统计结果中最小数值区间的区间边界数值和像素占比确定阈值。
可选的,作为一实现方式,第一计算模块84,具体用于:
计算所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值的平均值,得到光源光谱响应值。
本申请一实施例还提供一种多光谱反射率图像获取装置。该多光谱反射率图像获取装置中未详细描述之处请详见前述实施例一中方法的描述。
参见图8,图8是本申请实施例提供的一种多光谱反射率图像获取装置的示意框图。所述多光谱反射率图像获取装置包括:获取模块101、重构模块102、转换模块103、第一计算模块104和第二计算模块105。
获取模块101,用于获取多光谱图像,确定所述多光谱图像中每个像素的多光谱响应值;
重构模块102,用于根据所述多光谱图像重构RGB图像;
转换模块103,用于将所述RGB图像转换成灰度图像;
第一计算模块104,用于确定所述灰度图像中灰度值小于阈值或灰度值小于或等于阈值的目标区域,根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值;
第二计算模块105,用于根据所述多光谱图像中每个像素的多光谱响应值和所述光源光谱响应值,获取多光谱反射率图像。
可选的,作为一实现方式,所述灰度图像中各像素对应的灰度值根据该像素在所述RGB图像中的三通道数值计算得到。
作为该实现方式的一非限制性示例,根据公式deta=abs(1-G/B)+abs(1-R/B)计算各像素对应的灰度值,其中,R、G和B表示RGB图像中各像素的三通道数值,即R值、G值和B值。
可选的,作为一实现方式,如图9所示,所述多光谱反射率图像获取装置,还包括:阈值确定模块106。
阈值确定模块106,用于对所述灰度图像进行直方图统计,根据直方图统计结果中最小数值区间的区间参数确定阈值。
作为该实现方式的一非限制性示例,所述阈值确定模块106,具体用于:
根据直方图统计结果中最小数值区间的区间边界数值和像素占比确定阈值。
可选的,作为一实现方式,第一计算模块104,具体用于:
计算所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值的平均值,得到光源光谱响应值。
可选的,作为一实现方式,第二计算模块105,具体用于:
将所述多光谱图像中每个像素的多光谱响应值除以所述光源光谱响应值,获得多光谱反射率图像。
本申请实施例还提供了一种电子设备,如图10所示,电子设备可以包括一个或多个处理器120(图10中仅示出一个),存储器121以及存储在存储器121中并可在一个或多个处理器120上运行的计算机程序122,例如,获取光源光谱和/或多光谱反射率图像的程序。一个或多个处理器120执行计算机程序122时可以实现光源光谱获取方法和/或多光谱反射率图像获取方法实施例中的各个步骤。或者,一个或多个处理器120执行计算机程序122时可以实现光源光谱获取装置和/或多光谱反射率图像获取装置实施例中各模块/单元的功能,此处不作限制。
本领域技术人员可以理解,图10仅仅是电子设备的示例,并不构成对电子设备的限定。电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如电子设备还可以包括输入输出设备、网络接入设备、总线等。
在一个实施例中,所称处理器120可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
在一个实施例中,存储器121可以是电子设备的内部存储单元,例如电子设备的硬盘或内存。存储器121也可以是电子设备的外部存储设备,例如电子设备上配备的插接式硬盘,智能存储卡(smart media card,SMC),安全数字(secure digital,SD)卡,闪存卡(flash card)等。进一步地,存储器121还可以既包括电子设备的内部存储单元也包括外部存储设备。存储器121用于存储计算机程序以及电子设备所需的其他程序和数据。存储器121还可以用于暂时地存储已经输出或者将要输出的数据。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述系统中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时可实现光源光谱获取方法实施例和/或多光谱反射率图像获取方法实施例中的步骤。
本申请实施例提供了一种计算机程序产品,当计算机程序产品在电子设备上运行时,使得电子设备可实现光源光谱获取方法实施例和/或多光谱反射率图像获取方法实施例中的步骤。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
在本发明所提供的实施例中,应该理解到,所揭露的装置/电子设备和方法,可以通过其它的方式实现。例如,以上所描述的装置/电子设备实施例仅仅是示意性的,例如,模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实现上述实施例方法中的全部或部分流程,也可以通过计算机程序来指令相关的硬件来完成,所述计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,计算机程序包括计算机程序代码,计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。计算机可读介质可以包括:能够携带计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、电载波信号、电信信号以及软件分发介质等。需要说明的是,计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,计算机可读介质不包括电载波信号和电信信号。
以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围,均应包含在本发明的保护范围之内。

Claims (14)

  1. 一种光源光谱获取方法,其特征在于,包括:
    获取多光谱图像,确定所述多光谱图像中每个像素的多光谱响应值;
    根据所述多光谱图像重构RGB图像;
    将所述RGB图像转换成灰度图像;
    确定所述灰度图像中灰度值小于阈值或灰度值小于或等于阈值的目标区域,根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值。
  2. 一种多光谱反射率图像获取方法,其特征在于,包括:
    获取多光谱图像,确定所述多光谱图像中每个像素的多光谱响应值;
    根据所述多光谱图像重构RGB图像;
    将所述RGB图像转换成灰度图像;
    确定所述灰度图像中灰度值小于阈值或灰度值小于或等于阈值的目标区域,根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值;
    根据所述多光谱图像中每个像素的多光谱响应值和所述光源光谱响应值,获取多光谱反射率图像。
  3. 如权利要求1所述的方法,其特征在于,所述灰度图像中各像素对应的灰度值根据该像素在所述RGB图像中的三通道数值计算得到。
  4. 如权利要求2所述的方法,其特征在于,所述灰度图像中各像素对应的灰度值根据该像素在所述RGB图像中的三通道数值计算得到。
  5. 如权利要求1所述的方法,其特征在于,所述将所述RGB图像转换成灰度图像之后,还包括:
    对所述灰度图像进行直方图统计,根据直方图统计结果中最小数值区间的区间参数确定阈值。
  6. 如权利要求2所述的方法,其特征在于,所述将所述RGB图像转换成灰度图像之后,还包括:
    对所述灰度图像进行直方图统计,根据直方图统计结果中最小数值区间的区间参数确定阈值。
  7. 如权利要求5所述的方法,其特征在于,所述根据直方图统计结果中最小数值区间的区间参数确定阈值,包括:
    根据直方图统计结果中最小数值区间的区间边界数值和像素占比确定阈值。
  8. 如权利要求6所述的方法,其特征在于,所述根据直方图统计结果中最小数值区间的区间参数确定阈值,包括:
    根据直方图统计结果中最小数值区间的区间边界数值和像素占比确定阈值。
  9. 如权利要求1所述的方法,其特征在于,所述根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值,包括:
    计算所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值的平均值,得到光源光谱响应值。
  10. 如权利要求2所述的方法,其特征在于,所述根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值,包括:
    计算所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值的平均值,得到光源光谱响应值。
  11. 一种光源光谱获取装置,其特征在于,包括:
    获取模块,用于获取多光谱图像,确定所述多光谱图像中每个像素的多光谱响应值;
    重构模块,用于根据所述多光谱图像重构RGB图像;
    转换模块,用于将所述RGB图像转换成灰度图像;
    第一计算模块,用于确定所述灰度图像中灰度值小于阈值或灰度值小于或等于阈值的目标区域,根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值。
  12. 一种多光谱反射率图像获取装置,其特征在于,包括:
    获取模块,用于获取多光谱图像,确定所述多光谱图像中每个像素的多光谱响应值;
    重构模块,用于根据所述多光谱图像重构RGB图像;
    转换模块,用于将所述RGB图像转换成灰度图像;
    第一计算模块,用于确定所述灰度图像中灰度值小于阈值或灰度值小于或等于阈值的目标区域,根据所述目标区域在所述多光谱图像中对应的各像素的多光谱响应值计算光源光谱响应值;
    第二计算模块,用于根据所述多光谱图像中每个像素的多光谱响应值和所述光源光谱响应值,获取多光谱反射率图像。
  13. 一种电子设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现如权利要求1至10任一项所述方法的步骤。
  14. 一种计算机存储介质,所述计算机存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至10任一项所述方法的步骤。
PCT/CN2022/094817 2021-05-26 2022-05-25 光源光谱和多光谱反射率图像获取方法、装置及电子设备 WO2022247840A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/373,729 US20240021021A1 (en) 2021-05-26 2023-09-27 Light source spectrum and multispectral reflectivity image acquisition methods and apparatuses, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110578372.2A CN113340817B (zh) 2021-05-26 2021-05-26 光源光谱和多光谱反射率图像获取方法、装置及电子设备
CN202110578372.2 2021-05-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/373,729 Continuation US20240021021A1 (en) 2021-05-26 2023-09-27 Light source spectrum and multispectral reflectivity image acquisition methods and apparatuses, and electronic device

Publications (1)

Publication Number Publication Date
WO2022247840A1 true WO2022247840A1 (zh) 2022-12-01

Family

ID=77471615

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094817 WO2022247840A1 (zh) 2021-05-26 2022-05-25 光源光谱和多光谱反射率图像获取方法、装置及电子设备

Country Status (3)

Country Link
US (1) US20240021021A1 (zh)
CN (1) CN113340817B (zh)
WO (1) WO2022247840A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115151A (zh) * 2023-10-23 2023-11-24 深圳市德海威实业有限公司 基于机器视觉的sim卡座缺陷识别方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113340817B (zh) * 2021-05-26 2023-05-05 奥比中光科技集团股份有限公司 光源光谱和多光谱反射率图像获取方法、装置及电子设备
CN114463792B (zh) * 2022-02-10 2023-04-07 厦门熵基科技有限公司 一种多光谱识别方法、装置、设备及可读存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080046217A1 (en) * 2006-02-16 2008-02-21 Clean Earth Technologies, Llc Method for Spectral Data Classification and Detection in Diverse Lighting Conditions
CN103234915A (zh) * 2013-01-23 2013-08-07 北京交通大学 一种基于多光谱检测的波段选择方法
CN103268499A (zh) * 2013-01-23 2013-08-28 北京交通大学 基于多光谱成像的人体皮肤检测方法
CN110046564A (zh) * 2019-04-02 2019-07-23 深圳市合飞科技有限公司 一种多光谱活体指纹识别设备及识别方法
CN111368587A (zh) * 2018-12-25 2020-07-03 Tcl集团股份有限公司 场景检测方法、装置、终端设备及计算机可读存储介质
CN113297977A (zh) * 2021-05-26 2021-08-24 奥比中光科技集团股份有限公司 活体检测方法、装置及电子设备
CN113340816A (zh) * 2021-05-26 2021-09-03 奥比中光科技集团股份有限公司 光源光谱和多光谱反射率图像获取方法、装置及电子设备
CN113340817A (zh) * 2021-05-26 2021-09-03 奥比中光科技集团股份有限公司 光源光谱和多光谱反射率图像获取方法、装置及电子设备

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080046217A1 (en) * 2006-02-16 2008-02-21 Clean Earth Technologies, Llc Method for Spectral Data Classification and Detection in Diverse Lighting Conditions
CN103234915A (zh) * 2013-01-23 2013-08-07 北京交通大学 一种基于多光谱检测的波段选择方法
CN103268499A (zh) * 2013-01-23 2013-08-28 北京交通大学 基于多光谱成像的人体皮肤检测方法
CN111368587A (zh) * 2018-12-25 2020-07-03 Tcl集团股份有限公司 场景检测方法、装置、终端设备及计算机可读存储介质
CN110046564A (zh) * 2019-04-02 2019-07-23 深圳市合飞科技有限公司 一种多光谱活体指纹识别设备及识别方法
CN113297977A (zh) * 2021-05-26 2021-08-24 奥比中光科技集团股份有限公司 活体检测方法、装置及电子设备
CN113340816A (zh) * 2021-05-26 2021-09-03 奥比中光科技集团股份有限公司 光源光谱和多光谱反射率图像获取方法、装置及电子设备
CN113340817A (zh) * 2021-05-26 2021-09-03 奥比中光科技集团股份有限公司 光源光谱和多光谱反射率图像获取方法、装置及电子设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115151A (zh) * 2023-10-23 2023-11-24 深圳市德海威实业有限公司 基于机器视觉的sim卡座缺陷识别方法
CN117115151B (zh) * 2023-10-23 2024-02-02 深圳市德海威实业有限公司 基于机器视觉的sim卡座缺陷识别方法

Also Published As

Publication number Publication date
US20240021021A1 (en) 2024-01-18
CN113340817A (zh) 2021-09-03
CN113340817B (zh) 2023-05-05

Similar Documents

Publication Publication Date Title
WO2022247840A1 (zh) 光源光谱和多光谱反射率图像获取方法、装置及电子设备
Brooks et al. Unprocessing images for learned raw denoising
US8855412B2 (en) Systems, methods, and apparatus for image processing, for color classification, and for skin color detection
Lou et al. Color Constancy by Deep Learning.
JP3767541B2 (ja) 光源推定装置、光源推定方法、撮像装置および画像処理方法
US10641658B1 (en) Method and system for hyperspectral light field imaging
CN112580433A (zh) 一种活体检测的方法及设备
KR20140058674A (ko) 본질 이미지들을 이용하는 디지털 이미지 신호 압축 시스템 및 방법
WO2023273411A1 (zh) 一种多光谱数据的获取方法、装置及设备
CN106895916B (zh) 一种单次曝光拍摄获取多光谱图像的方法
Mazin et al. Estimation of illuminants from projections on the planckian locus
CN113340816B (zh) 光源光谱和多光谱反射率图像获取方法、装置及电子设备
CN113297977B (zh) 活体检测方法、装置及电子设备
CN104766068A (zh) 一种多规则融合的随机游走舌像提取方法
CN112651945A (zh) 一种基于多特征的多曝光图像感知质量评价方法
CN113297978B (zh) 活体检测方法、装置及电子设备
CN115018820A (zh) 基于纹理加强的乳腺癌多分类方法
Conni et al. The Effect of Camera Calibration on Multichannel Texture Classification.
CN110675366B (zh) 基于窄带led光源估计相机光谱灵敏度的方法
Yuan et al. Improved gamut-constrained illuminant estimation by combining modified category correlation
Yang et al. Fuzzy neural system for estimating the color temperature of digitally captured image with fpga implementation
CN117710344A (zh) 一种印品色相稳定性检测方法、装置及电子设备
Xiong et al. Modeling the Uncertainty in Inverse Radiometric Calibration
Wannous et al. Design of a customized pattern for improving color constancy across camera and illumination changes
CN118018863A (zh) 对图像进行白平衡的方法及装置、计算机系统及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810564

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22810564

Country of ref document: EP

Kind code of ref document: A1