CN112990244A - Method for distinguishing infrared image data and visible light image data - Google Patents
- Publication number
- CN112990244A (application number CN201911298648.0A)
- Authority
- CN
- China
- Prior art keywords
- image data
- image
- infrared
- visible light
- green
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Abstract
The invention provides a method for distinguishing infrared image data from visible light image data, which comprises the following steps: S1, acquiring initial image data of two images; S2, traversing the image pixels; S3, calculating a characteristic value for each of the two images according to a formula, wherein the formula is as follows: given the set of pixels in image m1, m1 = {n1, n2, ...}, where each pixel n contains the three color values red, green and blue, denoted r, g and b respectively, so that n = {r, g, b}, calculate the average of the green value g over m1: f0 = (g1 + g2 + ... + gN) / N; this average is the characteristic value; and S4, comparing the characteristic values of the two images and distinguishing the infrared image data from the visible light image data accordingly.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method for distinguishing infrared image data and visible light image data.
Background
With the continuous development of science and technology, and in particular of computer vision, image processing has become widely used. In face recognition, for example, image processing is essential, and current face recognition systems require a hardware scheme with two cameras, that is, a processor connected to two cameras, one a visible light camera and the other an infrared camera. Commonly used terms in the prior art include:
infrared picture: image data shot by an infrared camera;
visible light picture: image data shot by a visible light camera;
ISP: the Image Signal Processor (ISP) module is a complete camera subsystem that includes a variety of advanced image processing techniques designed to meet the requirements of high-performance cameras.
In order to achieve high-quality imaging, an Image Signal Processor (ISP) is required to process the raw image data. Current schemes for connecting two cameras include:
Scheme 1: use cameras with built-in ISPs;
Scheme 2: use one ISP to process the two cameras in a time-sharing manner.
Scheme 1 increases cost, because an ISP is already built into the CPU. Scheme 2, however, has the problem that the ISP cannot tell which camera a given frame of data came from, so infrared data may be treated as visible light data at the application layer, causing confusion.
As shown in fig. 1, taking the X1830 chip produced by Ingenic Semiconductor (Beijing) as an example, fig. 1 is a schematic diagram of the connection between the chip and an image sensor (camera): an ISP is built into the X1830 chip and connected to a CMOS image sensor, which is the common configuration.
As shown in fig. 2, when scheme 1 is adopted, two very expensive cameras with built-in ISPs are required, and the ISP built into the CPU is wasted.
As shown in fig. 3, when scheme 2 is adopted, two ordinary cameras are connected in turn, by time division, to the ISP built into the CPU, which then outputs to DRAM (memory).
As shown in fig. 4, with time-sharing operation the first frame may carry data from the first camera, the second frame data from the second camera, the third frame data from the first camera, the fourth frame data from the second camera, and so on, repeating.
Because the ISP does not differentiate the data it receives in this process (every frame is processed in the same way), and because a frame of data may occasionally be discarded, the output image data does not strictly alternate between the two cameras, so application-layer processing may become disordered.
Disclosure of Invention
In order to solve the problems in the prior art, the present invention aims to distinguish, at low cost, whether received image data was captured by the visible light camera or by the infrared camera.
The method, based on the operating principle of the image sensor, exploits the difference between the wavelengths of visible and infrared light, so that infrared or visible light data can be identified simply from the image data itself; the method is simple and its cost is low.
Specifically, the present invention provides a method of distinguishing infrared image data from visible light image data, the method comprising:
s1, acquiring initial image data of two images;
s2, traversing image pixels;
s3, calculating the characteristic values of the two images according to formula 1, wherein formula 1 is as follows:
given the set of pixels in image m1, m1 = {n1, n2, ...}, where each pixel n contains the three color values red, green and blue, denoted r, g and b respectively, n = {r, g, b}, the average of the green value g over m1 is calculated: f0 = (g1 + g2 + ... + gN) / N, where N is the number of pixels in m1; this average is the characteristic value;
and S4, comparing the characteristic values of the two images, and distinguishing infrared image data from visible light image data according to the characteristic values.
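Steps S1 to S4 with formula 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: numpy and the function names are assumptions, and each image is taken as an already-decoded H x W x 3 array with channels in R, G, B order.

```python
import numpy as np

def characteristic_value(image):
    # Formula 1: the average of the green value g over the pixel set m1.
    # `image` is an H x W x 3 array with channels in R, G, B order.
    return float(image[:, :, 1].mean())

def classify(img_a, img_b):
    # S4: the infrared camera's filter removes most green light, so the
    # image with the smaller mean green value is taken as the infrared one.
    fa = characteristic_value(img_a)
    fb = characteristic_value(img_b)
    return "a_is_infrared" if fa < fb else "b_is_infrared"
```

The comparison direction follows from the description: the infrared picture has its green mostly filtered out, so its mean green value is the lower of the two.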
In S3, the characteristic values of the two images may also be calculated according to formula 2: let x = r - b for each pixel; since the red value r is at its maximum in the infrared image, this difference reflects the relative loss of green g. The average of x over m1 is then calculated: f1 = (x1 + x2 + ... + xN) / N; this average is the characteristic value.
Comparing the characteristic values of the two images in S4 distinguishes the infrared image data from the visible light image data by the degree to which green has been filtered out.
Comparing the characteristic values of the two images in S4 comprises: comparing the first image with the second image; when the characteristic value of the first image is greater than that of the second image, the first image is an infrared image; and when the characteristic value of the first image is smaller than that of the second image, the second image is an infrared image.
The initial image data of the two images obtained in S1 are two images generated by shooting the same scene with two cameras.
One of the two cameras is a visible light camera, and the other one is an infrared camera.
An infrared filter is arranged in front of the infrared camera; most of the invisible infrared light passes through this filter, while the visible light to which human eyes are sensitive is reflected away.
Because, of the three colors red, green and blue, green is filtered the most, blue next and red the least, the red, green and blue tristimulus values of most pixels in the image satisfy: r > b > g.
The application has the following advantages: the method is low in cost and simple in process, it does not disorder application-layer processing, and it distinguishes infrared image data from visible light image data by a simple method at a lower cost than the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention.
Fig. 1 is a schematic diagram of a chip and an image sensor according to the prior art.
Fig. 2 is a schematic diagram of a prior art solution using a camera with an ISP.
Fig. 3 is a schematic diagram of a prior art scheme for processing two cameras in a time-sharing manner by using one ISP.
Fig. 4 is a schematic diagram of two common cameras which are mutually connected to an ISP built in a CPU by time division and then output to a DRAM.
Fig. 5 is a simple schematic block diagram of the method of the present invention.
Fig. 6 is a schematic diagram of an image sensor according to the present invention.
Fig. 7 is a schematic diagram of a sensor unit sensitive to light in a certain wavelength range, and a plurality of sensor units forming an array to form an image sensor.
Fig. 8 is a schematic diagram of color information of a certain point collected after color light according to the present invention passes through a sensor array.
Fig. 9 is a schematic diagram of the difference between the visible light camera and the infrared camera of the present invention.
Fig. 10 is an example of an infrared image in an embodiment of the present invention.
Fig. 11 is an example of a visible light image according to an embodiment of the present invention.
Fig. 12 is a schematic diagram of the red, green and blue image data corresponding to fig. 10.
Fig. 13 is a schematic diagram of the red, green and blue image data corresponding to fig. 11.
Fig. 14 is a detailed flow chart of a method to which the present invention relates.
Detailed Description
As shown in fig. 5, in order to solve the problems in the prior art, after receiving image data, the method of the present application distinguishes whether the image data is captured by a visible light camera or an infrared camera through calculation.
First, consider the principle of the image sensor. As shown in fig. 6, each sensor unit is sensitive to light in a certain wavelength range, and a plurality of sensor units form an array, constituting an image sensor.
Then, as shown in fig. 7, the value at each point in the array is converted into a digital signal by an ADC and finally becomes image data.
Visible light comprises the three primary colors red, green and blue, and the pixel array contains units sensitive to different wavelengths, thereby detecting the three primary colors. As shown in fig. 8, after colored light passes through the sensor array, the color information at each point can be collected.
Knowing the principle of visible light imaging, the next question is how the two cameras differ. One is a visible light camera and the other is an infrared camera, as shown in fig. 9. The main difference between them is the light-transmitting film in front of the lens. The visible light camera has a purple antireflection film in front of its lens; this film lets most visible light pass through and reflects away light in the invisible regions beyond violet and red. The infrared camera instead has an infrared filter in front of it, whose effect is almost the opposite of the antireflection film: it lets most of the invisible infrared light pass through and reflects away green and the other visible light to which human eyes are sensitive.
From the above principle, it can be seen that the acquired image data includes image data of different wavelengths.
Fig. 10 and fig. 11 compare the red, green and blue information of two real images. The two images are opened with a tool to view their red, green and blue image data, as shown in fig. 12 and fig. 13. In fig. 12, starting from offset 00, the first byte is blue, the second byte is green, the third byte is red, and the fourth byte is unused; every four bytes represent one pixel, and the image is composed cyclically in this order. As can be seen from the figure, the green values are generally 00, indicating that most of the green light has been filtered out.
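The byte layout described for fig. 12 (blue, green, red, unused, four bytes per pixel) can be unpacked directly from a raw frame. A hedged sketch, with function names that are illustrative rather than from the patent:

```python
def split_bgrx(raw: bytes):
    # Frame layout described for fig. 12: for each pixel, byte 0 is blue,
    # byte 1 is green, byte 2 is red and byte 3 is unused (4 bytes/pixel).
    assert len(raw) % 4 == 0
    blues = raw[0::4]
    greens = raw[1::4]
    reds = raw[2::4]
    return reds, greens, blues

def channel_mean(channel: bytes) -> float:
    # Average byte value of one color channel.
    return sum(channel) / len(channel)
```

On an infrared frame like fig. 10, `channel_mean(greens)` would come out near zero, matching the observation that the green bytes are generally 00.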
Comparing this with the visible light data in fig. 13, it can be seen that the three colors red, green and blue are substantially balanced.
The basic principle of the method is to identify the degree to which green has been filtered in the two images, in order to judge which image is infrared and which is visible. In addition, according to the principle of the filter, among the three colors red, green and blue, green is filtered the most, blue next and red the least, so the red, green and blue values of the pixels in the infrared image must mostly satisfy:
r>b>g
this data was obtained through a number of tests.
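The r > b > g relation can be spot-checked on a decoded frame. A minimal sketch, assuming numpy and an H x W x 3 array in R, G, B order; the function name is illustrative:

```python
import numpy as np

def fraction_r_gt_b_gt_g(image):
    # Fraction of pixels whose tristimulus values satisfy r > b > g,
    # the ordering observed on infrared frames in the tests above.
    r = image[:, :, 0]
    g = image[:, :, 1]
    b = image[:, :, 2]
    return float(np.mean((r > b) & (b > g)))
```

A value near 1.0 on one of the two frames is consistent with that frame being the infrared one.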
The following is the calculation method for extracting a characteristic value from an image.
Characteristic value 0
Set of pixels in picture m1:
m1={n1,n2,....}
Each pixel n comprises the three color values r, g and b:
n={r,g,b}
Calculate the average of the green values in m1:
f0 = (g1 + g2 + ... + gN) / N
This value is characteristic value 0, denoted f0.
This characteristic value reflects the relative intensity of the green color.
Tests show that this value is not robust enough, because the image data is affected by ambient light, and an absolute value is less meaningful than a relative one. The calculation method is therefore further refined:
x=r-b
Since the red value is at its maximum in the infrared image, this difference reflects the relative loss of green. The average of x is then calculated:
f1 = (x1 + x2 + ... + xN) / N
This value is characteristic value 1, denoted f1.
Now compare images p1 and p2. First calculate their characteristic values, denoted p1f1 and p2f1 respectively. Extensive data tests show that the two pictures can be well distinguished using characteristic value 1. That is:
when p1f1 > p2f1, p1 is the infrared image,
and vice versa.
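The f1 comparison can be sketched as follows. This is a minimal illustration under assumptions: numpy, decoded H x W x 3 arrays in R, G, B order, and illustrative function names.

```python
import numpy as np

def f1(image):
    # Formula 2: x = r - b per pixel; f1 is the average of x. Using the
    # relative value r - b rather than the absolute green level makes the
    # feature less sensitive to ambient light.
    r = image[:, :, 0].astype(np.float64)
    b = image[:, :, 2].astype(np.float64)
    return float((r - b).mean())

def infrared_of(p1, p2):
    # The image with the larger f1 is taken as infrared (p1f1 > p2f1).
    return "p1" if f1(p1) > f1(p2) else "p2"
```

Casting to float64 before subtracting avoids unsigned-integer wraparound when b > r in a visible light frame stored as uint8.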
Specifically, a flow chart of a method embodying the present invention is shown in fig. 14.
A method of distinguishing infrared image data from visible light image data, the method comprising:
s1, acquiring initial image data of two images;
s2, traversing image pixels;
s3, calculating characteristic values of the two images according to a formula;
and S4, comparing the characteristic values of the two images, and distinguishing infrared image data from visible light image data according to the characteristic values.
Wherein, the formula in S3 may be one of the following.
Calculating the characteristic value according to formula 1:
given the set of pixels in image m1, m1 = {n1, n2, ...}, where each pixel n contains the three color values red, green and blue, denoted r, g and b respectively, n = {r, g, b}, the average of the green value g over m1 is calculated: f0 = (g1 + g2 + ... + gN) / N.
Calculating the characteristic value according to formula 2:
let x = r - b for each pixel; since the red value r is at its maximum in the infrared image, this difference reflects the relative loss of green g. The average of x is then calculated: f1 = (x1 + x2 + ... + xN) / N.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes may be made to the embodiment of the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A method of distinguishing infrared image data from visible image data, the method comprising:
s1, acquiring initial image data of two images;
s2, traversing image pixels;
s3, calculating the characteristic values of the two images according to a formula, wherein the formula is as follows:
given the set of pixels in image m1, m1 = {n1, n2, ...}, where each pixel n contains the three color values red, green and blue, denoted r, g and b respectively, n = {r, g, b}, the average of the green value g over m1 is calculated: f0 = (g1 + g2 + ... + gN) / N;
and S4, comparing the characteristic values of the two images, and distinguishing infrared image data from visible light image data according to the characteristic values.
2. The method of claim 1, wherein the characteristic values of the two images in S3 may also be calculated according to formula 2: let x = r - b for each pixel; since the red value r is at its maximum in the infrared image, this difference reflects the relative loss of green g; the average of x is then calculated: f1 = (x1 + x2 + ... + xN) / N.
3. The method of claim 1, wherein comparing the characteristic values of the two images in S4 distinguishes the infrared image data from the visible light image data by the degree of green filtering.
4. The method of claim 1 or 2, wherein comparing the characteristic values of the two images in S4 comprises: comparing the first image with the second image; when the characteristic value of the first image is greater than that of the second image, the first image is an infrared image; and when the characteristic value of the first image is smaller than that of the second image, the second image is an infrared image.
5. The method of claim 1, wherein the initial image data of the two images obtained in S1 are two images of a scene captured by two cameras.
6. The method of claim 5, wherein the two cameras are one visible light camera and one infrared camera.
7. The method of claim 6, wherein an infrared filter is arranged in front of said infrared camera, through which most of the invisible infrared light passes while the visible light to which the human eye is sensitive is reflected away.
8. A method of distinguishing infrared image data from visible light image data as claimed in claim 1, wherein, because green is filtered the most of the three colors red, green and blue, blue next and red the least, the red, green and blue tristimulus values of any pixel in the image satisfy: r > b > g.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911298648.0A CN112990244A (en) | 2019-12-17 | 2019-12-17 | Method for distinguishing infrared image data and visible light image data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112990244A true CN112990244A (en) | 2021-06-18 |
Family
ID=76341898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911298648.0A Pending CN112990244A (en) | 2019-12-17 | 2019-12-17 | Method for distinguishing infrared image data and visible light image data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112990244A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114390249A (en) * | 2021-12-14 | 2022-04-22 | 北京百度网讯科技有限公司 | Video processing method and device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104079908A (en) * | 2014-07-11 | 2014-10-01 | 上海富瀚微电子股份有限公司 | Infrared and visible light image signal processing method and implementation device thereof |
CN105512667A (en) * | 2014-09-22 | 2016-04-20 | 中国石油化工股份有限公司 | Method for fire identification through infrared and visible-light video image fusion |
US20180247127A1 (en) * | 2015-09-25 | 2018-08-30 | Board Of Regents, The University Of Texas System | Classifying images and videos |
Non-Patent Citations (1)
Title |
---|
Pang Xiaoping: "Remote Sensing Cartography and Application" (《遥感制图与应用》), Surveying and Mapping Press, 30 June 2016, pages 64-67 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||