CN113340816A - Light source spectrum and multispectral reflectivity image acquisition method and device and electronic equipment - Google Patents

Info

Publication number
CN113340816A
Authority
CN
China
Prior art keywords: image, multispectral, value, gray, light source
Prior art date
Legal status
Granted
Application number
CN202110577248.4A
Other languages
Chinese (zh)
Other versions
CN113340816B (en)
Inventor
刘敏
龚冰冰
师少光
黄泽铗
张丁军
江隆业
Current Assignee
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date
Filing date
Publication date
Application filed by Orbbec Inc
Priority to CN202110577248.4A
Publication of CN113340816A
Application granted
Publication of CN113340816B
Legal status: Active

Classifications

    • G01N 21/25: Investigating or analysing materials by the use of optical means; Colour; Spectral properties, i.e. comparison of the effect of the material on light at two or more different wavelengths or wavelength bands
    • G01N 21/84: Systems specially adapted for particular applications
    • G06T 5/92: Dynamic range modification of images or parts thereof based on global image properties
    • G06T 7/136: Segmentation; Edge detection involving thresholding


Abstract

The application relates to the technical field of multispectral detection, and in particular to a method and a device for acquiring a light source spectrum and a multispectral reflectivity image, and to electronic equipment. The light source spectrum acquisition method comprises the following steps: acquiring a multispectral image and determining a multispectral response value of each pixel in the multispectral image; acquiring an RGB image and matching it with the multispectral image to obtain a matched RGB image; converting the matched RGB image into a gray image; and determining a target area in the gray image whose gray value is smaller than a threshold value (or smaller than or equal to the threshold value), and calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image. Embodiments of the application can improve the estimation accuracy of the ambient light spectrum.

Description

Light source spectrum and multispectral reflectivity image acquisition method and device and electronic equipment
Technical Field
The present disclosure relates to the field of multispectral detection technologies, and in particular to a method, an apparatus, and electronic equipment for acquiring a light source spectrum, and a method, an apparatus, and electronic equipment for acquiring a multispectral reflectance image.
Background
Multispectral imaging and multispectral analysis technologies can acquire both spatial (image-dimension) information and spectral-dimension information. The principle of multispectral imaging is to split incident light into several narrow bands and image each band separately on a multispectral detector, thereby obtaining images of different spectral bands that together form a multispectral data cube. Multispectral data processing typically extracts textures from the image dimension, for example Local Binary Pattern (LBP) textures, gray level co-occurrence matrix textures, or Histogram of Oriented Gradients (HOG) textures, and extracts material-composition and color-related features from the spectral dimension. Finally, the image-dimension and spectral-dimension information are fused to analyze the target object.
If the spectrum of the ambient light under which the multispectral image was captured is known, a multispectral reflectivity image can be obtained by dividing the response-value spectrum of each pixel of the multispectral image by the ambient light spectrum; the reflectivity image depends only on the properties of the photographed object and does not change with the light source. The ambient light spectrum, i.e., the light source spectrum, refers to the spectrum of the light incident on the surface of the photographed object. Acquiring the ambient light spectrum with high accuracy is therefore essential for acquiring a high-accuracy multispectral reflectivity image, and how to do so is an urgent problem to be solved.
Disclosure of Invention
In view of this, embodiments of the present application provide a light source spectrum acquiring method, a light source spectrum acquiring device, and an electronic device, and a multispectral reflectance image acquiring method, a multispectral reflectance image acquiring device, and an electronic device, which are capable of acquiring a spectrum of ambient light with higher accuracy.
In a first aspect, an embodiment of the present application provides a light source spectrum obtaining method, including:
acquiring a multispectral image, and determining a multispectral response value of each pixel in the multispectral image;
acquiring an RGB image, and matching the RGB image with the multispectral image to obtain a matched RGB image;
converting the matched RGB image into a gray image;
and determining a target area in the gray image whose gray value is smaller than a threshold value (or smaller than or equal to the threshold value), and calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
In this embodiment, a target area whose gray value is smaller than the threshold value (or smaller than or equal to the threshold value) is located, and the light source spectral response value is calculated based on that area, which improves the accuracy of the acquired light source spectrum.
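The four steps above can be sketched in NumPy as follows. The function name, the array shapes (an (H, W, C) multispectral cube and an (H, W) gray image), and the strict `<` comparison are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def estimate_light_source_spectrum(ms_image, gray_image, threshold):
    """Average the multispectral responses over the target area.

    ms_image:   (H, W, C) multispectral response values.
    gray_image: (H, W) gray (deta) image from the matched RGB image.
    threshold:  pixels with a gray value below it form the target area.
    """
    mask = gray_image < threshold          # target area selection
    if not mask.any():
        raise ValueError("no pixel falls below the threshold")
    return ms_image[mask].mean(axis=0)     # one response value per channel
```

A variant using `<=` instead of `<` covers the "smaller than or equal to" case mentioned in the claim.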
As an implementation manner of the first aspect, the gray value corresponding to each pixel in the gray image is obtained by calculation from the three channel values of the pixel in the matched RGB image.
As an example of this implementation, the gray value corresponding to each pixel is calculated according to the formula deta = abs(1 - G/B) + abs(1 - R/B), where R, G, and B represent the three channel values of each pixel in the matched RGB image, i.e., the R value, G value, and B value, and abs denotes the absolute value function.
As an implementation manner of the first aspect, after converting the matched RGB image into a grayscale image, the method further includes:
and determining a threshold value according to the gray-scale image.
As an implementation, determining a threshold from the grayscale image includes: and carrying out histogram statistics on the gray level image, and determining a threshold value according to the interval parameter of the minimum numerical value interval in the histogram statistical result.
As an implementation manner, the determining a threshold according to the interval parameter of the minimum value interval in the histogram statistic result includes:
and determining a threshold value according to the interval boundary value and the pixel ratio of the minimum value interval in the histogram statistical result.
As an implementation manner of the first aspect, the calculating a light source spectral response value according to a multispectral response value of each pixel corresponding to the target region in the multispectral image includes:
and calculating the average value of the multispectral response values of the pixels corresponding to the target area in the multispectral image to obtain a light source spectral response value.
In a second aspect, an embodiment of the present application provides a method for acquiring a multispectral reflectance image, including:
acquiring a multispectral image, and determining a multispectral response value of each pixel in the multispectral image;
acquiring an RGB image, and matching the RGB image with the multispectral image to obtain a matched RGB image;
converting the matched RGB image into a gray image;
determining a target area in the gray image whose gray value is smaller than a threshold value (or smaller than or equal to the threshold value), and calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image;
and acquiring a multispectral reflectivity image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value.
In this embodiment, a target area whose gray value is smaller than the threshold value (or smaller than or equal to the threshold value) is located, and the light source spectral response value is calculated based on that area, which improves the accuracy of the acquired light source spectrum and thereby the accuracy of the multispectral reflectivity image.
As an implementation manner of the second aspect, the gray value corresponding to each pixel in the gray image is obtained by calculation from the three channel values of the pixel in the matched RGB image.
As an example of this implementation, the gray value corresponding to each pixel is calculated according to the formula deta = abs(1 - G/B) + abs(1 - R/B), where R, G, and B represent the three channel values of each pixel in the matched RGB image, i.e., the R value, G value, and B value, and abs denotes the absolute value function.
As an implementation manner of the second aspect, after converting the matched RGB image into a grayscale image, the method further includes:
and determining a threshold value according to the gray-scale image.
As an implementation, determining a threshold from the grayscale image includes: and carrying out histogram statistics on the gray level image, and determining a threshold value according to the interval parameter of the minimum numerical value interval in the histogram statistical result.
As an implementation manner, the determining a threshold according to the interval parameter of the minimum value interval in the histogram statistic result includes:
and determining a threshold value according to the interval boundary value and the pixel ratio of the minimum value interval in the histogram statistical result.
As an implementation manner of the second aspect, the calculating a light source spectral response value according to a multispectral response value of each pixel corresponding to the target region in the multispectral image includes:
and calculating the average value of the multispectral response values of the pixels corresponding to the target area in the multispectral image to obtain a light source spectral response value.
As an implementation manner of the second aspect, the acquiring a multispectral reflectivity image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value includes:
and dividing the multispectral response value of each pixel in the multispectral image by the light source spectral response value to obtain a multispectral reflectivity image.
In a third aspect, an embodiment of the present application provides a light source spectrum acquisition apparatus, including:
the acquisition module is used for acquiring a multispectral image and determining a multispectral response value of each pixel in the multispectral image;
the matching module is used for acquiring an RGB image, matching the RGB image with the multispectral image and acquiring a matched RGB image;
the conversion module is used for converting the matched RGB image into a gray image;
the first calculation module is used for determining a target area in the gray image whose gray value is smaller than (or smaller than or equal to) a threshold value, and for calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
In a fourth aspect, an embodiment of the present application provides a multispectral reflectance image capturing apparatus, including:
the acquisition module is used for acquiring a multispectral image and determining a multispectral response value of each pixel in the multispectral image;
the matching module is used for acquiring an RGB image, matching the RGB image with the multispectral image and acquiring a matched RGB image;
the conversion module is used for converting the matched RGB image into a gray image;
the first calculation module is used for determining a target area in the gray image whose gray value is smaller than (or smaller than or equal to) a threshold value, and for calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image;
and the second calculation module is used for acquiring a multispectral reflectivity image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the light source spectrum acquisition method according to the first aspect or any of the implementations of the first aspect and/or implementing the multispectral reflectance image acquisition method according to the second aspect or any of the implementations of the second aspect when executing the computer program.
In a sixth aspect, the present embodiments provide a computer-readable storage medium, which stores a computer program, which when executed by a processor implements the light source spectrum acquisition method according to the first aspect or any implementation manner of the first aspect, and/or implements the multispectral reflectance image acquisition method according to the second aspect or any implementation manner of the second aspect.
In a seventh aspect, embodiments of the present application provide a computer program product, which when run on an electronic device, causes the electronic device to execute the light source spectrum acquisition method according to the first aspect or any implementation manner of the first aspect, and/or implement the multispectral reflectivity image acquisition method according to the second aspect or any implementation manner of the second aspect.
It is to be understood that, for the beneficial effects of the third aspect to the seventh aspect, reference may be made to the description of the first aspect or the second aspect, and details are not repeated here.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a method for acquiring a multispectral reflectance image according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a statistical result of histogram statistics on a grayscale image according to an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating an implementation of another multi-spectral reflectance image acquisition method according to an embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating an implementation of a method for detecting a living body according to an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart illustrating another method for detecting a living body according to an embodiment of the present application;
fig. 6 is a schematic diagram of a light source spectrum acquisition apparatus according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another light source spectrum acquisition apparatus provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of an apparatus for acquiring a multispectral reflectance image according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of another multispectral reflectance image capture device according to an embodiment of the present application;
fig. 10 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Further, in the description of the present application, "a plurality" means two or more. The terms "first," "second," "third," and "fourth," etc. are used merely to distinguish one description from another, and are not to be construed as indicating or implying relative importance.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Light source estimation methods generally fall into the following two categories. First, white-world-based light source spectrum estimation: this method finds the brightest region of the multispectral image and takes the average spectrum of that region as the light source spectrum; it restores the light source well when the brightest region is white. Second, gray-world-based light source spectrum estimation: this method takes the average spectrum of the entire multispectral image as the light source spectrum; it works well for scenes with rich colors.
Both methods estimate the light source spectrum from a coarse prior over the entire multispectral image. For example, white-world estimation takes the spectrum of the brightest region in the multispectral image as the light source spectrum; if the brightest region is not white, the estimation error is large. Likewise, gray-world estimation takes the average of all pixels in the multispectral image as the light source spectrum; if the image contains few white areas and a single color occupies a large area, the estimation error is also large.
The two methods therefore adapt poorly to application scenarios with different light sources and produce large errors. To solve the technical problem of estimating the ambient light or light source spectrum (also called the approximate light source spectrum) more accurately, embodiments of the present application provide a multispectral reflectivity image acquisition method that acquires a multispectral image, locates the light source region in it, and determines the light source spectrum according to the multispectral information of that region.
Example one
Fig. 1 is a schematic flow chart of an implementation of a method for acquiring a multispectral reflectivity image according to an embodiment of the present disclosure; the method can be executed by an electronic device. Electronic devices include, but are not limited to, computers, tablets, servers, cell phones, multispectral cameras, and the like; servers include, but are not limited to, stand-alone servers, cloud servers, and the like. The multispectral reflectivity image acquisition method in this embodiment applies to situations in which the light source spectrum (or approximate light source spectrum) of the current environment needs to be estimated. As shown in fig. 1, the method may include steps S110 to S150.
S110, acquiring a multispectral image, and determining a multispectral response value of each pixel in the multispectral image.
The multispectral image is a single multispectral image of an arbitrary scene (in which ambient light or a light source is present), collected by a multispectral camera. The information contained in the single multispectral image includes response-value information for each pixel, which represents the response, on the multispectral camera, of the light reflected into it. The response values vary with the intensity of the light source, the shape of the light source spectrum, and the illumination direction of the light source.
The number of channels of the multispectral camera may be several to dozens, for example eight channels, nine channels, sixteen channels, or the like. The number of channels and the wavelength band of each channel of the multispectral camera are not particularly limited in this embodiment. For better understanding of the present embodiment, a nine-channel multispectral camera is used as an example of the multispectral camera, and it should be understood that the exemplary description should not be construed as a specific limitation to the present embodiment.
As a non-limiting example, with a nine-channel multispectral camera each pixel yields nine response values x1, x2, x3, x4, x5, x6, x7, x8, and x9; that is, the multispectral response value of each pixel consists of nine response values, one per channel. Here x1 represents the response value of the first channel, which has the response curve characteristic q1; x2 represents the response value of the second channel, with response curve characteristic q2; x3 represents the response value of the third channel, with response curve characteristic q3; ...; and x9 represents the response value of the ninth channel, with response curve characteristic q9. In general, xi represents the response value of the i-th channel with response curve characteristic qi, where i is an integer from 1 to 9.
And S120, reconstructing a Red Green Blue (RGB) image according to the multispectral image.
Each pixel in the RGB image has response values of three channels, i.e., an R value of an R channel, a G value of a G channel, and a B value of a B channel. And reconstructing the RGB image according to the multispectral image, namely calculating the R value, the G value and the B value of each pixel according to the multispectral response value of each pixel in the multispectral image.
As an implementation manner, step S120, reconstructing an RGB image according to the multispectral image, includes the following steps S121 to S124.
And S121, acquiring Quantum Efficiency (QE) response curves of nine channels of the multispectral camera.
Specifically, the QE response curve matrices of the nine channels of the multispectral camera are acquired; they may be denoted q1, q2, q3, q4, q5, q6, q7, q8, and q9. The matrix q1 is the response curve of the first channel, q2 that of the second channel, ..., and q9 that of the ninth channel; that is, qj is the response curve of the j-th channel, where j is an integer from 1 to 9. For a fixed multispectral camera (or fixed multispectral hardware), these response curves can be obtained through testing, stored in advance in the memory of the electronic device, and called when needed.
And S122, acquiring tristimulus value curves, namely an r curve, a g curve and a b curve.
The spectral tristimulus value curves of the CIE 1931 RGB colorimetric system are acquired, comprising the r curve, the g curve, and the b curve. These curves are known and can be found in the CIE standard; they are pre-stored in the memory of the electronic device and called when needed.
And S123, performing linear fitting on the tristimulus value curve by using the QE response curve of the nine channels to obtain fitting parameters.
Specifically, the r curve, the g curve and the b curve are respectively linearly fitted with response curves of nine channels, namely q1, q2, q3, q4, q5, q6, q7, q8 and q9 curves by using a linear fitting method. The formula for the linear fit is as follows:
r=a1*q1+a2*q2+a3*q3+a4*q4+a5*q5+a6*q6+a7*q7+a8*q8+a9*q9;
g=b1*q1+b2*q2+b3*q3+b4*q4+b5*q5+b6*q6+b7*q7+b8*q8+b9*q9;
b=c1*q1+c2*q2+c3*q3+c4*q4+c5*q5+c6*q6+c7*q7+c8*q8+c9*q9。
solving the above equation by partial least squares to obtain the values of the fitting parameters, i.e. the values of the following parameters:
a1,a2,a3,a4,a5,a6,a7,a8,a9;
b1,b2,b3,b4,b5,b6,b7,b8,b9;
c1,c2,c3,c4,c5,c6,c7,c8,c9。
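A sketch of this curve-fitting step in NumPy. As a stand-in for the partial least squares mentioned above, it uses ordinary least squares via `np.linalg.lstsq`; the QE curves are gathered as columns of an (N, 9) matrix Q sampled at N wavelengths, and the function name and shapes are assumptions:

```python
import numpy as np

def solve_fit_params(Q, r_curve, g_curve, b_curve):
    """Least-squares fit of each tristimulus curve as a linear mix of
    the nine QE curves: r ~ Q @ a, g ~ Q @ b, b ~ Q @ c.

    Q: (N, 9) matrix whose columns are q1..q9 sampled at N wavelengths.
    Returns (a, b, c), each a length-9 coefficient vector.
    """
    a, *_ = np.linalg.lstsq(Q, r_curve, rcond=None)
    b, *_ = np.linalg.lstsq(Q, g_curve, rcond=None)
    c, *_ = np.linalg.lstsq(Q, b_curve, rcond=None)
    return a, b, c
```
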
and S124, performing fitting calculation according to the fitting parameters and the multispectral response value of each pixel to obtain an R value, a G value and a B value of each pixel.
Specifically, step S110 determines the nine-channel response values x1, x2, x3, x4, x5, x6, x7, x8, and x9 of a given pixel in the multispectral image, and step S123 yields the fitting parameters; step S124 then performs a fitting calculation from the fitting parameters and the pixel's nine-channel response values to obtain the R value, G value, and B value of the pixel, according to the following formulas:
R=a1*x1+a2*x2+a3*x3+a4*x4+a5*x5+a6*x6+a7*x7+a8*x8+a9*x9;
G=b1*x1+b2*x2+b3*x3+b4*x4+b5*x5+b6*x6+b7*x7+b8*x8+b9*x9;
B=c1*x1+c2*x2+c3*x3+c4*x4+c5*x5+c6*x6+c7*x7+c8*x8+c9*x9。
and obtaining the R value, the G value and the B value of each pixel in the multispectral image through fitting calculation, and obtaining an RGB image corresponding to the whole multispectral image, namely reconstructing the RGB image according to the multispectral image.
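The three per-pixel formulas above can be applied to the whole image as matrix products; the function name and the (H, W, 9) array layout are assumptions:

```python
import numpy as np

def reconstruct_rgb(ms_image, a, b, c):
    """Apply the fitted coefficients to every pixel's nine-channel
    response: R = a . x, G = b . x, B = c . x, where x is the pixel's
    multispectral response vector.

    ms_image: (H, W, 9) multispectral responses.
    a, b, c:  length-9 fitting-parameter vectors from the fitting step.
    Returns an (H, W, 3) reconstructed RGB image.
    """
    return np.stack([ms_image @ a, ms_image @ b, ms_image @ c], axis=-1)
```
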
In other embodiments, after the RGB image is reconstructed, white balance may be applied to it to obtain a white-balanced RGB image, which may be denoted RGB_wb. In these embodiments, the subsequent step S130 converts the RGB_wb image into a grayscale image.
In some implementations, an existing white balance method, such as the gray world method, the white world method, or the automatic threshold method, may be applied directly to the RGB image to obtain the white-balanced image RGB_wb. With this white balance step, the areas whose deta values are close to 0 found in the subsequent step S140 correspond better to gray or white areas, so the area selection result, and hence the acquired light source spectrum, is more accurate.
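A sketch of the gray-world variant mentioned above: each channel is scaled so that its mean matches the overall mean. The function name is an assumption:

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Gray-world white balance: assume the scene averages to gray and
    scale each channel so its mean equals the overall image mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel means (R, G, B)
    return rgb * (means.mean() / means)       # broadcast over the last axis
```
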
And S130, converting the RGB image into a gray image.
The gray image may also be referred to as a deta image. The gray value (or deta value) of each pixel is calculated from the R value, G value, and B value of the pixel's R, G, and B channels in the RGB image, and the gray image (or deta image) corresponding to the RGB image is assembled from the per-pixel gray values.
As a non-limiting example, the R, G and B channels of the RGB image are extracted; for each pixel, the deta value is computed according to the formula deta = abs(1 - G/B) + abs(1 - R/B), where abs denotes the absolute value function; the deta value is assigned to the pixel of the grayscale image as its gray value, and the deta image is obtained from the gray values of all pixels.
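The deta formula can be sketched directly in NumPy (the small epsilon guarding against a zero B channel is our addition, not part of the formula):

```python
import numpy as np

def deta_image(rgb, eps=1e-8):
    """deta = abs(1 - G/B) + abs(1 - R/B) per pixel; eps guards
    against division by zero in the B channel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.abs(1.0 - g / (b + eps)) + np.abs(1.0 - r / (b + eps))

rgb = np.array([[[0.5, 0.5, 0.5],      # gray pixel    -> deta ~ 0
                 [0.9, 0.2, 0.3]]])    # colored pixel -> deta > 0
d = deta_image(rgb)
print(d)   # [[~0, ~2.33]]
```

A gray or white pixel has R = G = B, so both ratio terms are 1 and deta is 0, which is exactly why a near-zero deta value marks the areas sought in step S140.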
And S140, determining a target area with the gray value smaller than a threshold value in the gray image, and calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
Specifically, a target region in the grayscale image (or the deta image) in which the grayscale value (or the deta value) is smaller than a threshold is determined. The threshold value t may be a value close to 0. And calculating the spectral response value of the light source according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
In this embodiment, the area in which the deta value is close to 0 is sought in the deta image in order to find the area in which the R value, G value and B value are all close to one another. When the three values are close, the area is a white area or a gray area of some shade. Since the reflectance curve of a white and/or gray area is flat, the spectral curve of the reflected light coincides with that of the incident light source, differing only in luminance. Thus, the spectrum of a white and/or gray area reflects the light source spectrum more accurately.
As an implementation of this embodiment, histogram statistics are performed on the deta image, i.e. the distribution of the deta values in the deta image is tallied into a histogram, and the threshold t is determined from the histogram statistical result. Specifically, after histogram statistics are performed on the deta image, the threshold t is determined from the interval parameters of the minimum value interval in the histogram statistical result. The interval parameters include, but are not limited to, one or more of the number of pixels, the pixel proportion, and the interval boundary values.
As a non-limiting example, the statistical process of histogram statistics on the grayscale image (or deta image) is as follows. First, the minimum value M0 and the maximum value M10 of the gray values (or deta values) are found. Then the range from M0 to M10 is divided into 10 value intervals, from small to large: [M0, M1), [M1, M2), [M2, M3), [M3, M4), [M4, M5), [M5, M6), [M6, M7), [M7, M8), [M8, M9), [M9, M10], where M0, M1, M2, M3, M4, M5, M6, M7, M8, M9 and M10 may be referred to as the interval boundary values M. The number of pixels whose gray value is greater than or equal to M0 and less than M1 is counted, i.e. the number of pixels in the first (minimum) value interval; the ratio of this count to the total number of pixels is h1, i.e. the pixel proportion of the first value interval is h1. The pixel proportions h of the second to tenth value intervals, obtained in the same way, are in turn: h2, h3, h4, h5, h6, h7, h8, h9 and h10. A schematic diagram of the statistical result of histogram statistics on the deta image is shown in fig. 2. For the first (minimum) value interval, t = M0 + (M1 - M0) × h1. The t value corresponding to each value interval is different and depends on that interval's boundary values M and its h value. In this embodiment, only the t value of the first value interval needs to be found, i.e. the t value for which deta is close to 0 is determined.
As another non-limiting example, the minimum value M0 and the maximum value M10 of the gray values (or deta values) are found first; the range from M0 to M10 is then divided into 10 value intervals. The interval parameters of the first (minimum) value interval are determined: specifically, the number of pixels whose deta value is greater than or equal to M0 and less than M1 is counted, i.e. the number of pixels in the minimum value interval, and the ratio of this count to the total number of pixels is h1, i.e. the pixel proportion of the first value interval is h1. Finally, the threshold t is determined from M0, M1 and h1, for example t = M0 + (M1 - M0) × h1. In this way, a t value for which deta is close to 0 is determined.
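The threshold computation in the two examples above can be sketched as follows (the function name is ours; 10 intervals as in the example):

```python
import numpy as np

def deta_threshold(deta, n_bins=10):
    """Determine t = M0 + (M1 - M0) * h1 from the first (minimum)
    value interval of an n_bins histogram over the deta values."""
    m0, m10 = float(deta.min()), float(deta.max())
    edges = np.linspace(m0, m10, n_bins + 1)   # M0, M1, ..., M10
    m1 = float(edges[1])
    h1 = np.mean((deta >= m0) & (deta < m1))   # pixel proportion h1
    return m0 + (m1 - m0) * h1

deta = np.array([0.0, 0.0, 0.1, 1.0])
t = deta_threshold(deta)
print(t)   # 0.05: M1 = 0.1, h1 = 2/4 = 0.5
```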
After the threshold t is determined, the target area with deta < t is identified, i.e. the area in the gray image whose deta values are close to 0, and the average value of each of the nine channels over the corresponding pixels in the multispectral image is calculated. This average multispectral data of the target area is the approximate light source spectrum. For example, if the target area with deta < t in the gray image contains N pixels, N being a positive integer, the nine-channel multispectral response values of the N corresponding pixels are taken from the multispectral image, and for each of the nine channels the average of the N response values is computed; these averages serve as the light source spectral response values. Since each of the N pixels carries a multispectral response value for each of the nine channels, the result is nine average values, one per channel.
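The averaging step can be sketched with a boolean mask (a minimal illustration; the toy deta values and responses below are made up):

```python
import numpy as np

def light_source_spectrum(ms_image, deta, t):
    """Average the nine channel responses over the target area
    (pixels whose deta value is below the threshold t)."""
    mask = deta < t
    return ms_image[mask].mean(axis=0)   # nine values, one per channel

ms = np.ones((3, 3, 9))
ms[0, 0] = 2.0                  # one brighter near-gray pixel
deta = np.full((3, 3), 0.9)
deta[0, 0] = deta[1, 1] = 0.01  # two pixels fall in the target area
spec = light_source_spectrum(ms, deta, t=0.05)
print(spec)   # nine values, each (2.0 + 1.0) / 2 = 1.5
```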
In other implementations of this embodiment, after the threshold t is determined, the target area with deta <= t is counted.
It should be noted that the number of value intervals used in the histogram statistics of this embodiment may be an empirical value, obtained for example from experience with existing shooting data. The more finely the range is divided, the closer the deta values of the resulting target area are to 0, and in theory the more accurate the obtained light source spectrum; however, when the range is divided too finely, the target area with deta close to 0 contains only a few pixels and the obtained light source spectrum becomes too noisy. The number of intervals is therefore a compromise, neither too large nor too small, and this application does not specifically limit it.
The plurality of value intervals divided during histogram statistics may include one or a combination of a plurality of left-open/right-closed intervals, left-closed/right-open intervals, left-open/right-open intervals, left-closed/right-closed intervals, and the like. This is not a particular limitation of the present application.
S150, acquiring a multispectral reflectivity image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value.
As an implementation manner of this embodiment, the multispectral response value of each pixel in the multispectral image is determined according to step S110, and the light source spectral response value is determined according to step S140, so that in step S150, the multispectral response value of each pixel in the multispectral image is divided by the light source spectral response value to obtain the multispectral reflectivity image.
As a non-limiting example, the nine-channel multispectral response values of a pixel in the multispectral image are x1, x2, x3, x4, x5, x6, x7, x8, x9. The light source spectral response values, i.e. the averages of the multispectral response values of the nine channels, are y1, y2, y3, y4, y5, y6, y7, y8, y9. Calculating x1/y1, x2/y2, x3/y3, x4/y4, x5/y5, x6/y6, x7/y7, x8/y8 and x9/y9 gives the reflectivity of the pixel, and after computing the reflectivity of every pixel, the multispectral reflectivity map corresponding to the multispectral image is obtained.
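The per-pixel, per-channel division above is a single broadcast divide in NumPy (toy values for illustration):

```python
import numpy as np

def reflectance_image(ms_image, light_spectrum):
    """Divide each pixel's nine-channel response (H, W, 9) by the
    nine light source spectral response values, broadcasting over
    the last axis."""
    return ms_image / light_spectrum

ms = np.full((2, 2, 9), 0.6)    # dummy multispectral responses
light = np.full(9, 1.2)         # dummy light source spectrum
refl = reflectance_image(ms, light)
print(refl[0, 0, 0])            # 0.5
```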
This embodiment exploits the fact that an RGB image can be restored from the multispectral image: a white or gray area is found in the restored RGB image, because the spectrum of a white or gray area in the multispectral image is the one closest to the light source spectrum. By adding this area-selection step and using the average spectrum of the area as the approximate light source spectrum, the estimated light source spectrum is more accurate, the method is applicable to scenes with different light sources, and the multispectral reflectivity image calculated based on this light source spectrum is more accurate.
Example two
Fig. 3 is a schematic flow chart illustrating an implementation of a method for acquiring a multispectral reflectivity image according to another embodiment of the present application; the method of this embodiment can be executed by an electronic device. As shown in fig. 3, the multispectral reflectivity image acquisition method may include steps S210 to S250. Details that are the same as in the first embodiment are omitted here for brevity.
S210, acquiring a multispectral image, and determining a multispectral response value of each pixel in the multispectral image.
S220, obtaining the RGB image, matching the RGB image with the multispectral image, and obtaining the matched RGB image.
In the first embodiment, the RGB image is reconstructed from the multispectral image, so the RGB image and the multispectral image share the same viewing angle. In the second embodiment, the RGB image of the same scene is acquired by a separate camera, i.e. a color camera, so the RGB image and the multispectral image acquired by the multispectral camera have different viewing angles, and a matching operation is required.
As an implementation of this embodiment, the pixel points in the RGB image correspond one to one to the pixel points in the multispectral image; for example, a certain object in the RGB image corresponds to the pixel points of that object in the multispectral image. When a gray or white area is found in the RGB image, the same area in the multispectral image is located through this correspondence, and the average of the multi-channel responses of that area is calculated as the approximate light source spectral response value.
In this embodiment, the color camera and the multispectral camera are arranged adjacently: the closer their positions, the closer the fields of view captured by their receiving (imaging) ends, so more pixel points correspond between the RGB image and the multispectral image during matching, which improves the accuracy of the light source spectrum estimation result.
And S230, converting the matched RGB image into a gray image.
And calculating the gray value corresponding to each pixel in the gray image according to the multichannel numerical value of the pixel in the matched RGB image.
S240, determining a target area with the gray value smaller than a threshold value in the gray image, and calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
And S250, acquiring a multispectral reflectivity image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value.
The difference between the second embodiment and the first embodiment is that steps S120 and S220 are different, and other steps are the same or similar. In the first embodiment, the RGB image is obtained by the multispectral image reconstruction, so that the RGB image and the multispectral image are obtained by the same camera, and both images have the same viewing angle, so that the estimated spectral accuracy of the light source in the first embodiment is higher than that in the second embodiment.
The multispectral reflectivity images obtained by the methods of the first and second embodiments can be used for living body detection or applied in other models, and they exhibit good robustness when applied under different light sources. For example, when living body detection is performed based on the multispectral reflectivity image, the analysis result does not change as the light source changes. A living body detection method is described next.
Because the spectral characteristics of real human skin and a prosthesis (such as a fake finger or a fake mask) differ greatly in several characteristic wave bands, the band ratio method provided by this application can apply the spectral characteristics of skin to exclude most prostheses and fully meet the accuracy requirements of common products. For example, the characteristics of real human skin include: in the 420 to 440nm (unit: nanometer) band, the skin shows its peculiar melanin absorption; in the 550 to 590nm band, its peculiar hemoglobin absorption; in the 960 to 980nm band, its peculiar moisture absorption; and in the 800 to 850nm band, the skin absorbs weakly (i.e. reflects highly). For payment scenarios with higher accuracy requirements, the band ratio method can serve as the first-step judgment of multispectral liveness detection, excluding most prostheses; when an extremely high-fidelity prosthesis is encountered, higher-accuracy models such as machine learning or deep learning are used for judgment. The band ratio method has a simpler calculation process than such models and is less affected by factors such as ambient light and dark noise.
Example three
Fig. 4 is a schematic flow chart illustrating an implementation of a method for detecting a living body according to another embodiment of the present application, where the method for detecting a living body in this embodiment can be executed by an electronic device. As shown in fig. 4, the living body detecting method may include steps S310 to S340.
S310, acquiring a multispectral image containing human skin, wherein the multispectral image contains at least one pixel.
The human skin includes, but is not limited to, exposed skin of a certain part or area of the human body, such as facial skin, the skin of a certain area of the face, or finger skin.
Acquiring a multispectral image containing human skin by a multispectral camera. The multispectral image includes at least one pixel. It should be noted that the at least one pixel is a pixel for imaging human skin.
And S320, determining a first multispectral response value Dw1 and a second multispectral response value Dw2 of the at least one pixel in the first characteristic wave band and the second characteristic wave band respectively.
Wherein, according to the multispectral image, a first multispectral response value Dw1 of at least one pixel in a first characteristic wave band and a second multispectral response value Dw2 in a second characteristic wave band are determined.
As understood from the description of the first embodiment, the multispectral image includes multi-channel multispectral response values for each pixel. The first embodiment does not limit the number of channels or their bands; in the third embodiment, the multiple channels include at least the two channels of the first characteristic band and the second characteristic band, with no limit on the number or bands of the other channels. That is, in the third embodiment, the multispectral camera has at least 2 channels, including at least the channels of the first characteristic band and the second characteristic band. The multispectral image therefore includes, for each pixel, the multispectral response values of at least these two channels, namely the first multispectral response value Dw1 of the first characteristic band and the second multispectral response value Dw2 of the second characteristic band. Thus, Dw1 and Dw2 of at least one pixel corresponding to human skin can be determined from the multispectral image.
In the third embodiment, two representative wavelength bands, namely the first characteristic wavelength band w1 and the second characteristic wavelength band w2, can be selected according to the reflection spectrum characteristics of the real human skin.
In some implementations, the first characteristic band w1 is selected to be an absorption peak band specific to real human skin where there is a large difference in reflectivity between the prosthesis and the real human skin. For example, the 420 to 440nm band or a band within the band, which is a melanin absorption band specific to the skin of a real human body; as another example, a wavelength band of 550 to 590nm or a certain wavelength band within the wavelength band, which is a hemoglobin absorption wavelength band specific to real human skin; for example, 960-980 nm band or a band within the band, which is a moisture absorption band specific to real human skin.
In some implementations, the second characteristic wavelength band w2 is selected to be a non-absorption peak wavelength band of real human skin, i.e., a wavelength band where real human skin absorbs weakly (or reflects highly), such as the 800 to 850nm wavelength band or a wavelength band within this wavelength band.
S330, respectively obtaining a first light source spectral response value Sw1 and a second light source spectral response value Sw2 of a first characteristic wave band and a second characteristic wave band according to the multispectral image.
The first light source spectral response value Sw1 of the first characteristic wave band and the second light source spectral response value Sw2 of the second characteristic wave band are obtained according to the multispectral image.
In some implementations of the third embodiment, the first light source spectral response value Sw1 of the first characteristic band and the second light source spectral response value Sw2 of the second characteristic band may be obtained using existing techniques.
In other implementations of the third embodiment, the method for obtaining the spectral response values of the light sources described in the first and second embodiments may be used to obtain the first spectral response value Sw1 of the light source with the first characteristic wavelength band and the second spectral response value Sw2 of the light source with the second characteristic wavelength band. Where not described in detail herein, reference is made to the description relating to embodiment one and embodiment two.
Specifically, first, an RGB image corresponding to the multispectral image is obtained: the RGB image may be reconstructed from the multispectral image (see embodiment one), or captured of the same scene at the time the multispectral image is captured (see embodiment two). Then the RGB image is converted into a gray image, and the target area whose gray values are smaller than the threshold is determined in the gray image. Finally, the average of the multispectral response values of the first-characteristic-band channel over the target area is calculated, i.e. the first light source spectral response value Sw1, and likewise the average for the second-characteristic-band channel, i.e. the second light source spectral response value Sw2.
It should be noted that, on the one hand, since the light source spectrum estimated by the methods of the first and second embodiments is more accurate, the first light source spectral response value Sw1 and the second light source spectral response value Sw2 obtained based on the related descriptions of the first and second embodiments are more accurate, so that the accuracy of the subsequent living body detection result can be improved. On the other hand, the method for estimating the light source spectrum in the first embodiment and the second embodiment is applicable to application scenes of different light sources, so that the in-vivo detection scheme can have better robustness when applied under different light sources.
S340, calculating the product of Dw1/Dw2 and Sw2/Sw1, comparing the product with a threshold k, and if the product is smaller than the threshold k, judging that the human body is a living body.
The multispectral response value of at least one pixel point in the first characteristic wave band w1 is Dw1, and the estimated response value of the first characteristic wave band w1 of the light source spectrum is a first light source spectrum response value Sw 1; the multispectral response value of at least one pixel point in the second characteristic wave band w2 is Dw2, and the estimated response value of the second characteristic wave band w2 of the light source spectrum is a second light source spectrum response value Sw 2.
And calculating (Dw1/Dw2) (Sw2/Sw1) to obtain a product Rw, comparing the product Rw with a threshold k, and obtaining a living body detection result according to the comparison result.
As an implementation, first the ratio of Dw1 to Sw1 is calculated, i.e. the reflectivity value of the at least one pixel in the first characteristic band, denoted Rw1 = Dw1/Sw1; likewise the ratio of Dw2 to Sw2, i.e. the reflectivity value in the second characteristic band, denoted Rw2 = Dw2/Sw2. Then the ratio of Rw1 to Rw2 is calculated and denoted Rw: Rw = Rw1/Rw2 = (Dw1/Dw2) × (Sw2/Sw1). Hence this implementation may be referred to as the band ratio liveness detection method.
In some embodiments, if the product Rw is less than the threshold k, the human body is determined to be a living body; and if the product Rw is equal to or larger than the threshold k, judging the human body as a prosthesis. In other embodiments, the comparison condition is adjusted according to the actual accuracy requirement of the in-vivo detection, for example, when the product Rw is equal to the threshold k, the corresponding in-vivo detection result may be set as: the human body is judged to be a living body. This is not a particular limitation of the present application.
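The band ratio decision rule can be sketched in a few lines (a minimal illustration; the function name, example values and threshold are ours, not measured data):

```python
def band_ratio_is_live(dw1, dw2, sw1, sw2, k):
    """Rw = (Dw1/Dw2) * (Sw2/Sw1); the human body is judged a
    living body when Rw is below the threshold k."""
    rw = (dw1 / dw2) * (sw2 / sw1)
    return rw < k

# Skin absorbs strongly in band w1, so Dw1 (hence Rw) is small for a live body.
print(band_ratio_is_live(dw1=0.1, dw2=0.8, sw1=1.0, sw2=1.0, k=0.5))  # True
print(band_ratio_is_live(dw1=0.7, dw2=0.8, sw1=1.0, sw2=1.0, k=0.5))  # False
```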
On the basis of the embodiment shown in fig. 4, in other embodiments, the step S340 further includes a step of determining the threshold k.
As an implementation, the process of determining the threshold k includes: acquiring first sample reflectivity R1 and second sample reflectivity R2 of a plurality of real skin samples in a first characteristic wave band and a second characteristic wave band, calculating first sample reflectivity ratios of the plurality of real skin samples, namely R1/R2, and determining the maximum value a of the first sample reflectivity ratios in the plurality of real skin samples. In addition, third sample reflectivity R3 and fourth sample reflectivity R4 of a plurality of different types of prosthesis samples in the first characteristic wave band and the second characteristic wave band are obtained, second sample reflectivity ratios R3/R4 of the plurality of prosthesis samples are calculated, and the minimum value b of the second sample reflectivity ratios in the plurality of prosthesis samples is determined. Finally, a threshold k is determined from the maximum a and minimum b.
As a non-limiting example, the first sample reflectivity R1 in the first characteristic band and the second sample reflectivity R2 in the second characteristic band of each of M (M an integer greater than 1) different real skin samples are collected by a spectrometer, the first sample reflectivity ratio R1/R2 of each of the M real skin samples is calculated, and the maximum value a of these ratios is found. Likewise, the third sample reflectivity R3 in the first characteristic band and the fourth sample reflectivity R4 in the second characteristic band of N (N an integer greater than 1) different types of prosthesis samples are collected by the spectrometer, the second sample reflectivity ratio R3/R4 of each of the N prosthesis samples is calculated, and the minimum value b of these ratios is found. The value range of the threshold k is then determined from a and b, for example: min(a, b) <= k <= (a + b)/2, where min denotes the minimum function; that is, k is greater than or equal to the smaller of a and b, and less than or equal to the mean of a and b. The specific value of k can be determined according to the requirements of the practical application, and with this simple design of the threshold k, more living bodies and prostheses can be distinguished.
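The determination of the admissible range for k can be sketched as follows (the sample reflectivities below are illustrative, not measured data):

```python
import numpy as np

def threshold_k_range(real_r1, real_r2, fake_r3, fake_r4):
    """a = max of R1/R2 over real skin samples; b = min of R3/R4 over
    prosthesis samples; k may lie in [min(a, b), (a + b) / 2]."""
    a = float(np.max(np.asarray(real_r1) / np.asarray(real_r2)))
    b = float(np.min(np.asarray(fake_r3) / np.asarray(fake_r4)))
    return min(a, b), (a + b) / 2

lo, hi = threshold_k_range(real_r1=[0.10, 0.15], real_r2=[0.80, 0.75],
                           fake_r3=[0.60, 0.70], fake_r4=[0.80, 0.75])
print(lo, hi)   # a = 0.2, b = 0.75 -> k range [0.2, 0.475]
```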
Example four
Fig. 5 is a schematic flow chart of an implementation of a living body detection method according to another embodiment of the present application; the living body detection method in this embodiment can be executed by an electronic device. As shown in fig. 5, the living body detection method may include steps S410 to S450. It should be understood that details of the fourth embodiment that are the same as those of the third embodiment are omitted here for brevity.
S410, acquiring a multispectral image containing human skin, wherein the multispectral image contains at least one pixel.
And S420, determining a first multispectral response value Dw1 and a second multispectral response value Dw2 of the at least one pixel in the first characteristic wave band and the second characteristic wave band respectively.
And S430, respectively acquiring a first light source spectral response value Sw1 and a second light source spectral response value Sw2 of a first characteristic wave band and a second characteristic wave band according to the multispectral image.
S440, calculating a first ratio of Dw1 to Sw1, calculating a second ratio of Dw2 to Sw2, and calculating a third ratio of the first ratio to the second ratio.
A first ratio of Dw1 to Sw1 is calculated, i.e. the reflectivity value of the at least one pixel in the first characteristic band, denoted Rw1 = Dw1/Sw1; and a second ratio of Dw2 to Sw2, i.e. the reflectivity value of the at least one pixel in the second characteristic band, denoted Rw2 = Dw2/Sw2. Then a third ratio of Rw1 to Rw2 is calculated, denoted Rw: Rw = Rw1/Rw2 = (Dw1/Dw2) × (Sw2/Sw1).
S450, inputting the first ratio, the second ratio and the third ratio into a living body detection model to obtain a living body detection result.
The living body detection model is a trained detection model used for judging whether a human body to be detected is a living body. And inputting the first ratio Rw1, the second ratio Rw2 and the third ratio Rw into a living body detection model, wherein the model can output a classification result that the human body to be detected is a living body or a prosthesis.
In the present embodiment, the living body detection model may include a machine learning or deep learning model. Such as a support vector machine model, a neural network model, a bayesian classifier, or a random forest. The living body detection model is not particularly limited in the present application.
In some implementations, the living body detection model can include a two-classification model, whose two classes are that the human body to be detected is a living body and that it is a prosthesis. For example, [Rw1, Rw2, Rw1/Rw2] is input into the living body detection model; an output of 1 indicates the human body to be detected is a living body, and an output of 0 indicates it is a prosthesis.
In other implementations, the in vivo testing model can include a multi-classification in vivo testing model, in which implementations the in vivo testing model can classify the living body and/or the prosthesis more finely. For example, the prostheses may be further subdivided to distinguish different types or classes of prostheses (e.g., different types or classes of prostheses correspond to prostheses of different materials). The number of classifications of the in-vivo detection model is not particularly limited in the present application.
It should be noted that, before the living body test model is used, a trained living body test model needs to be acquired. As a non-limiting example, the process of acquiring a trained liveness detection model includes: obtaining respective first sample vectors and corresponding labels of a plurality of real skin samples, wherein the first sample vectors comprise three characteristics of a first sample reflectivity value of the real skin samples in a first characteristic waveband, a second sample reflectivity value of the real skin samples in a second characteristic waveband, and a ratio of the first sample reflectivity value to the second sample reflectivity value; obtaining a second sample vector and corresponding labels of a plurality of different types of prosthesis samples, wherein the second sample vector comprises three characteristics of a third sample reflectivity value of the prosthesis sample in a first characteristic waveband, a fourth sample reflectivity value of the prosthesis sample in a second characteristic waveband, and a ratio of the third sample reflectivity value to the fourth sample reflectivity value; and training the living body detection model by using the first sample vector and the corresponding label as well as the second sample vector and the corresponding label as training samples to obtain the trained living body detection model. In this way, the trained in vivo detection model can enable classification of the living body and the prosthesis, that is, the trained in vivo detection model can be used to identify whether the human body to be tested is a living body. 
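The training procedure above can be sketched with a simple gradient-descent logistic regression as a stand-in for the SVM, neural network, Bayesian or random forest models the embodiment names (everything below — model choice, function names, toy feature values — is our illustration, not the patented method's specific model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_liveness_model(X, y, lr=0.5, epochs=2000):
    """Gradient-descent logistic regression on feature vectors
    [Rw1, Rw2, Rw1/Rw2]; label 1 = live sample, 0 = prosthesis."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        grad = sigmoid(X @ w + b) - y          # per-sample residual
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_live(w, b, X):
    return (sigmoid(X @ w + b) > 0.5).astype(int)

# Toy separable data: live samples have a small band ratio Rw1/Rw2.
X = np.array([[0.2, 0.8, 0.25], [0.3, 0.9, 0.33],   # live
              [0.8, 0.8, 1.00], [0.9, 0.7, 1.29]])  # prosthesis
y = np.array([1, 1, 0, 0])
w, b = train_liveness_model(X, y)
print(predict_live(w, b, X))   # [1 1 0 0]
```

Any of the model families listed in the embodiment would consume the same three-dimensional sample vectors and labels; the sketch only illustrates the input/output contract of the training step.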
It should be understood that, as a non-limiting example, the process of obtaining a first sample reflectance value, a second sample reflectance value, a ratio of the first sample reflectance value to the second sample reflectance value, a third sample reflectance value, a fourth sample reflectance value, and a ratio of the third sample reflectance value to the fourth sample reflectance value may be referred to in the related description of determining the threshold value k.
In this embodiment, the band ratio Rw1/Rw2 is added to the reflectivity features of the two characteristic bands to form a three-dimensional combined feature vector, namely [Rw1, Rw2, Rw1/Rw2], increasing the dimension of the features. This feature vector is input into the living body detection model to output a living body detection result that is determined jointly by the three features in [Rw1, Rw2, Rw1/Rw2], so a more accurate result can be obtained.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
An embodiment of the present application further provides a light source spectrum acquisition device. For details of the light source spectrum obtaining apparatus, please refer to the description of the method in the first embodiment and the second embodiment.
Referring to fig. 6, fig. 6 is a schematic block diagram of a light source spectrum acquisition apparatus provided in an embodiment of the present application. The light source spectrum acquisition device includes: an acquisition module 81, a matching module 82, a conversion module 83 and a first calculation module 84.
The acquisition module 81 is configured to acquire a multispectral image and determine a multispectral response value of each pixel in the multispectral image.
The matching module 82 is configured to acquire an RGB image and match the RGB image with the multispectral image to obtain a matched RGB image.
The conversion module 83 is configured to convert the matched RGB image into a grayscale image.
The first calculation module 84 is configured to determine a target region in the grayscale image whose gray value is smaller than a threshold (or smaller than or equal to the threshold), and to calculate a light source spectral response value according to the multispectral response value of each pixel corresponding to the target region in the multispectral image.
Optionally, as an implementation manner, the gray value corresponding to each pixel in the grayscale image is calculated from the three channel values of the corresponding pixel in the matched RGB image.
As a non-limiting example of this implementation, the gray value corresponding to each pixel is calculated according to the formula deta = abs(1 - G/B) + abs(1 - R/B), where R, G, and B represent the three channel values (the R value, G value, and B value) of the pixel in the matched RGB image, and abs represents the absolute value function.
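The per-pixel formula above vectorizes directly over a whole image. The sketch below follows the patent's formula exactly; the zero-denominator guard is an added assumption, since the formula itself does not say how B = 0 is handled:

```python
import numpy as np

def gray_from_rgb(rgb):
    """Per-pixel gray value deta = abs(1 - G/B) + abs(1 - R/B) computed from
    the matched RGB image (H x W x 3 array, channels in R, G, B order)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    b = np.where(b == 0, np.finfo(float).eps, b)  # guard divide-by-zero (assumption)
    return np.abs(1 - g / b) + np.abs(1 - r / b)

# A pixel with R = G = B yields deta = 0, while a strongly colored pixel
# scores high, so thresholding deta separates the two kinds of region.
img = np.array([[[100, 100, 100], [200, 100, 50]]], dtype=np.uint8)
gray = gray_from_rgb(img)  # shape (1, 2)
```

Note that `deta` is kept as the patent's own identifier for the gray value rather than renamed.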
Optionally, as an implementation manner, as shown in fig. 7, the light source spectrum obtaining apparatus further includes: a threshold determination module 85.
The threshold determining module 85 is configured to perform histogram statistics on the grayscale image and determine the threshold according to an interval parameter of the minimum value interval in the histogram statistical result.
As a non-limiting example of this implementation, the threshold determining module 85 is specifically configured to:
determine the threshold according to the interval boundary value and the pixel ratio of the minimum value interval in the histogram statistical result.
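The patent names the two inputs (an interval boundary value and a pixel ratio of the minimum value interval) but does not spell out how they combine, so the rule below is an illustrative assumption: take the lowest-valued histogram bin, extend it until the accumulated pixel ratio is large enough, and use that bin's upper boundary as the threshold:

```python
import numpy as np

def threshold_from_histogram(gray, num_bins=50, min_ratio=0.01):
    """Illustrative threshold rule (assumption, not the patent's exact rule):
    starting from the minimum value interval, accumulate bins until their
    pixel ratio reaches min_ratio, then return that bin's upper boundary."""
    counts, edges = np.histogram(gray, bins=num_bins)
    total = gray.size
    cum = 0
    for i, c in enumerate(counts):
        cum += c
        if cum / total >= min_ratio:
            return edges[i + 1]  # interval boundary value of the accepted bin
    return edges[-1]

# 10 low-valued pixels against 990 high-valued ones: the threshold lands
# just above the low cluster.
gray = np.concatenate([np.full(10, 0.05), np.full(990, 3.0)])
t = threshold_from_histogram(gray, num_bins=10, min_ratio=0.005)
```

The `min_ratio` guard keeps a near-empty lowest bin (a handful of noise pixels) from fixing the threshold on its own, which is one plausible reading of why the pixel ratio enters the rule.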
Optionally, as an implementation manner, the first calculating module 84 is specifically configured to:
calculate the average value of the multispectral response values of the pixels corresponding to the target region in the multispectral image to obtain the light source spectral response value.
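The averaging step can be sketched as a masked per-band mean; the array shapes are assumptions for illustration:

```python
import numpy as np

def light_source_spectrum(mspec, gray, threshold):
    """Average the multispectral response of the target region (pixels whose
    gray value is below the threshold), band by band.
    mspec: H x W x C multispectral image; gray: H x W gray image."""
    mask = gray < threshold           # target region
    return mspec[mask].mean(axis=0)   # one light source response value per band

mspec = np.array([[[2.0, 4.0], [10.0, 20.0]],
                  [[4.0, 6.0], [12.0, 22.0]]])  # 2x2 image, 2 bands
gray = np.array([[0.1, 5.0], [0.2, 6.0]])       # left column is the target region
spectrum = light_source_spectrum(mspec, gray, threshold=1.0)
```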
An embodiment of the present application further provides a multispectral reflectivity image acquisition apparatus. For details of the apparatus that are not described below, refer to the description of the method in the foregoing embodiments.
Referring to fig. 8, fig. 8 is a schematic block diagram of a multispectral reflectivity image capturing device according to an embodiment of the present disclosure. The multispectral reflectivity image acquisition device comprises: an acquisition module 101, a matching module 102, a conversion module 103, a first calculation module 104 and a second calculation module 105.
The acquisition module 101 is configured to acquire a multispectral image and determine a multispectral response value of each pixel in the multispectral image.
The matching module 102 is configured to acquire an RGB image and match the RGB image with the multispectral image to obtain a matched RGB image.
The conversion module 103 is configured to convert the matched RGB image into a grayscale image.
The first calculation module 104 is configured to determine a target region in the grayscale image whose gray value is smaller than a threshold (or smaller than or equal to the threshold), and to calculate a light source spectral response value according to the multispectral response value of each pixel corresponding to the target region in the multispectral image.
The second calculation module 105 is configured to obtain a multispectral reflectivity image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value.
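Taken together, modules 101 through 105 form a short pipeline once the RGB image has been matched to the multispectral image. The end-to-end sketch below is self-contained and assumes matching has already been done; the threshold is passed in rather than derived from the histogram:

```python
import numpy as np

def reflectance_pipeline(mspec, rgb_matched, threshold):
    """End-to-end sketch of modules 101-105 (RGB/multispectral matching
    assumed already done): gray conversion -> target region ->
    light source spectrum -> reflectivity image."""
    rgb = rgb_matched.astype(np.float64)
    r, g = rgb[..., 0], rgb[..., 1]
    b = np.maximum(rgb[..., 2], np.finfo(float).eps)   # divide-by-zero guard (assumption)
    gray = np.abs(1 - g / b) + np.abs(1 - r / b)       # conversion module 103
    mask = gray < threshold                            # target region, module 104
    light = mspec[mask].mean(axis=0)                   # light source spectral response
    light = np.maximum(light, np.finfo(float).eps)
    return mspec / light                               # module 105, broadcast over H x W

mspec = np.float64([[[2.0, 8.0], [4.0, 4.0]]])     # 1x2 image, 2 bands
rgb = np.uint8([[[90, 90, 90], [200, 80, 40]]])    # first pixel is neutral (R=G=B)
refl = reflectance_pipeline(mspec, rgb, threshold=0.5)
```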
Optionally, as an implementation manner, the gray value corresponding to each pixel in the grayscale image is calculated from the three channel values of the corresponding pixel in the matched RGB image.
As a non-limiting example of this implementation, the gray value corresponding to each pixel is calculated according to the formula deta = abs(1 - G/B) + abs(1 - R/B), where R, G, and B represent the three channel values (the R value, G value, and B value) of the pixel in the matched RGB image.
Optionally, as an implementation manner, as shown in fig. 9, the multispectral reflectance image obtaining apparatus further includes: a threshold determination module 106.
The threshold determining module 106 is configured to perform histogram statistics on the grayscale image and determine the threshold according to an interval parameter of the minimum value interval in the histogram statistical result.
As a non-limiting example of this implementation, the threshold determining module 106 is specifically configured to:
determine the threshold according to the interval boundary value and the pixel ratio of the minimum value interval in the histogram statistical result.
Optionally, as an implementation manner, the first calculating module 104 is specifically configured to:
calculate the average value of the multispectral response values of the pixels corresponding to the target region in the multispectral image to obtain the light source spectral response value.
Optionally, as an implementation manner, the second calculating module 105 is specifically configured to:
divide the multispectral response value of each pixel in the multispectral image by the light source spectral response value to obtain the multispectral reflectivity image.
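The per-pixel division is a band-wise broadcast over the whole image. The zero-denominator guard below is an added assumption, since the patent does not specify behavior for a zero light source response:

```python
import numpy as np

def multispectral_reflectance(mspec, light_spectrum):
    """Divide each pixel's multispectral response (H x W x C) by the light
    source spectral response (length-C vector), band by band."""
    light = np.asarray(light_spectrum, float)
    light = np.where(light == 0, np.finfo(float).eps, light)  # guard (assumption)
    return mspec / light  # broadcasts the C-vector over H x W

mspec = np.array([[[3.0, 10.0], [6.0, 5.0]]])  # 1x2 image, 2 bands
refl = multispectral_reflectance(mspec, [3.0, 5.0])
```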
Embodiments of the present application further provide an electronic device. As shown in fig. 10, the electronic device may include one or more processors 120 (only one is shown in fig. 10), a memory 121, and a computer program 122 stored in the memory 121 and executable on the one or more processors 120, for example, a program for acquiring a light source spectrum and/or a multispectral reflectivity image. The steps in the embodiments of the light source spectrum acquisition method and/or the multispectral reflectivity image acquisition method may be implemented when the one or more processors 120 execute the computer program 122. Alternatively, the one or more processors 120 may, when executing the computer program 122, implement the functions of the modules/units in the embodiments of the light source spectrum acquisition apparatus and/or the multispectral reflectivity image acquisition apparatus, which is not limited herein.
Those skilled in the art will appreciate that fig. 10 is merely an example of an electronic device and is not intended to limit the electronic device. The electronic device may include more or fewer components than shown, or combine certain components, or different components, e.g., the electronic device may also include input-output devices, network access devices, buses, etc.
In one embodiment, the processor 120 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In one embodiment, the memory 121 may be an internal storage unit of the electronic device, such as a hard disk or memory of the electronic device. The memory 121 may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device. Further, the memory 121 may include both an internal storage unit and an external storage device of the electronic device. The memory 121 is configured to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Embodiments of the present application further provide a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the embodiments of the light source spectrum acquisition method and/or the embodiments of the multispectral reflectance image acquisition method.
Embodiments of the present application provide a computer program product, which when run on an electronic device, enables the electronic device to implement the steps in the light source spectrum acquisition method embodiments and/or the multispectral reflectance image acquisition method embodiments.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other manners. For example, the apparatus/electronic device embodiments described above are merely illustrative: the division of the modules or units is merely a logical function division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method according to the embodiments of the present invention may also be implemented by a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, may implement the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method for obtaining a spectrum of a light source, comprising:
acquiring a multispectral image, and determining a multispectral response value of each pixel in the multispectral image;
acquiring an RGB image, and matching the RGB image with the multispectral image to obtain a matched RGB image;
converting the matched RGB image into a gray image;
and determining a target area of which the gray value is smaller than a threshold value or the gray value is smaller than or equal to the threshold value in the gray image, and calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
2. A method for acquiring a multispectral reflectance image, comprising:
acquiring a multispectral image, and determining a multispectral response value of each pixel in the multispectral image;
acquiring an RGB image, and matching the RGB image with the multispectral image to obtain a matched RGB image;
converting the matched RGB image into a gray image;
determining a target area with a gray value smaller than a threshold value or a gray value smaller than or equal to the threshold value in the gray image, and calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image;
and acquiring a multispectral reflectivity image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value.
3. The method as claimed in claim 1 or 2, wherein the gray value corresponding to each pixel in the gray image is calculated according to three-channel numerical values of the pixel in the matched RGB image.
4. The method of claim 1 or 2, wherein after converting the matched RGB image into a grayscale image, further comprising:
and carrying out histogram statistics on the gray level image, and determining a threshold value according to the interval parameter of the minimum numerical value interval in the histogram statistical result.
5. The method of claim 4, wherein determining the threshold value according to the bin parameter of the smallest value bin in the histogram statistics comprises:
and determining a threshold value according to the interval boundary value and the pixel ratio of the minimum value interval in the histogram statistical result.
6. The method according to claim 1 or 2, wherein said calculating a light source spectral response value from a multispectral response value of each pixel corresponding to said target region in said multispectral image comprises:
and calculating the average value of the multispectral response values of the pixels corresponding to the target area in the multispectral image to obtain a light source spectral response value.
7. A light source spectrum acquisition apparatus, comprising:
the acquisition module is used for acquiring a multispectral image and determining a multispectral response value of each pixel in the multispectral image;
the matching module is used for acquiring an RGB image, matching the RGB image with the multispectral image and acquiring a matched RGB image;
the conversion module is used for converting the matched RGB image into a gray image;
the first calculation module is used for determining a target area of which the gray value is smaller than a threshold value or the gray value is smaller than or equal to the threshold value in the gray image, and calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image.
8. A multispectral reflectance image capture device, comprising:
the acquisition module is used for acquiring a multispectral image and determining a multispectral response value of each pixel in the multispectral image;
the matching module is used for acquiring an RGB image, matching the RGB image with the multispectral image and acquiring a matched RGB image;
the conversion module is used for converting the matched RGB image into a gray image;
the first calculation module is used for determining a target area of which the gray value is smaller than a threshold value or the gray value is smaller than or equal to the threshold value in the gray image and calculating a light source spectral response value according to the multispectral response value of each pixel corresponding to the target area in the multispectral image;
and the second calculation module is used for acquiring a multispectral reflectivity image according to the multispectral response value of each pixel in the multispectral image and the light source spectral response value.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 6 are implemented when the computer program is executed by the processor.
10. A computer storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN202110577248.4A 2021-05-26 2021-05-26 Light source spectrum and multispectral reflectivity image acquisition method and device and electronic equipment Active CN113340816B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110577248.4A CN113340816B (en) 2021-05-26 2021-05-26 Light source spectrum and multispectral reflectivity image acquisition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113340816A true CN113340816A (en) 2021-09-03
CN113340816B CN113340816B (en) 2023-10-27

Family

ID=77471547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110577248.4A Active CN113340816B (en) 2021-05-26 2021-05-26 Light source spectrum and multispectral reflectivity image acquisition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113340816B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022247840A1 (en) * 2021-05-26 2022-12-01 奥比中光科技集团股份有限公司 Light source spectrum and multispectral reflectivity image acquisition methods and apparatuses, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010056237A1 (en) * 1996-11-19 2001-12-27 Cane Michael Roger Method of and apparatus for investigating tissue histology
JP2004280591A (en) * 2003-03-17 2004-10-07 Ntt Data Corp Multi-spectral image processor, multi-spectral image processing method, and program for execution by computer
CN101499167A (en) * 2009-03-17 2009-08-05 杨星 Graying method based on spectral reflection characteristics and three-color imaging principle
CN105338326A (en) * 2015-11-26 2016-02-17 南京大学 Embedded high-space and high-spectral resolution video acquisition system
CN108520488A (en) * 2018-04-10 2018-09-11 深圳劲嘉集团股份有限公司 A kind of method and electronic equipment for reconstructing spectrum and being replicated



Also Published As

Publication number Publication date
CN113340816B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
CN113340817B (en) Light source spectrum and multispectral reflectivity image acquisition method and device and electronic equipment
JP5496509B2 (en) System, method, and apparatus for image processing for color classification and skin color detection
CN108197546B (en) Illumination processing method and device in face recognition, computer equipment and storage medium
JP3767541B2 (en) Light source estimation apparatus, light source estimation method, imaging apparatus, and image processing method
Kaya et al. Towards spectral estimation from a single RGB image in the wild
CN108520488B (en) Method for reconstructing spectrum and copying spectrum and electronic equipment
Benezeth et al. Background subtraction with multispectral video sequences
US10641658B1 (en) Method and system for hyperspectral light field imaging
CN112580433A (en) Living body detection method and device
KR20140058674A (en) System and method for digital image signal compression using intrinsic images
WO2023273411A1 (en) Multispectral data acquisition method, apparatus and device
CN113340816A (en) Light source spectrum and multispectral reflectivity image acquisition method and device and electronic equipment
CN113297977B (en) Living body detection method and device and electronic equipment
CN113297978B (en) Living body detection method and device and electronic equipment
Conni et al. The Effect of Camera Calibration on Multichannel Texture Classification.
CN110675366B (en) Method for estimating camera spectral sensitivity based on narrow-band LED light source
JP7334509B2 (en) 3D geometric model generation system, 3D geometric model generation method and program
WO2021220444A1 (en) Skin evaluation coefficient learning device, skin evaluation index estimation device, skin evaluation coefficient learning method, skin evaluation index estimation method, focus value acquisition method, and skin smoothness acquisition method
KR101713293B1 (en) Color compensation device for color face recognition
CN118018863A (en) Method and device for white balancing image, computer system and storage medium
Yao et al. Shadow removal from images using an improved single-scale retinex color restoration algorithm
CN116724564A (en) Image sensor, image data acquisition method, and imaging apparatus
Wannous et al. Design of a customized pattern for improving color constancy across camera and illumination changes
Gomez et al. Computational colour constancy by using two learning machines: Contributions to neural networks and ridge regression for illuminant estimation
Xiong et al. Modeling the Uncertainty in Inverse Radiometric Calibration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant