CN115002356A - Night vision method based on digital video photography - Google Patents

Night vision method based on digital video photography

Info

Publication number
CN115002356A
CN115002356A (application CN202210846085.XA)
Authority
CN
China
Prior art keywords
processor
image
standard
pixel
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210846085.XA
Other languages
Chinese (zh)
Inventor
刘刚
邱波
李小辉
曾文琪
苏茹
邱枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen ACT Industrial Co Ltd
Original Assignee
Shenzhen ACT Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen ACT Industrial Co Ltd filed Critical Shenzhen ACT Industrial Co Ltd
Priority to CN202210846085.XA priority Critical patent/CN115002356A/en
Publication of CN115002356A publication Critical patent/CN115002356A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/30: Transforming light or analogous information into electric information
    • H04N5/33: Transforming infrared radiation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a night vision method based on digital video photography, comprising a light sensing module, a night vision camera end, an image analysis module, an imaging module, a controller, a digital video storage module and a display module. The brightness of the image is divided into levels; a third processor obtains the pixel value of the image at each standard brightness level, and a fourth processor reads the imaging data stored in an image data reference database and calculates the output pixel of the image, weakening the brightness of an overexposed image and compensating the brightness of an underexposed image, so that unclear image content caused by a light source that is too strong or too weak in night vision shooting is avoided. The digital video storage module divides the final picture into grids of equal area as far as possible, records the position of each grid, and records the color of each grid in as much detail as possible for storage as a binary file, avoiding unclear display of the night vision picture.

Description

Night vision method based on digital video photography
Technical Field
The invention belongs to the field of night vision for photography, relates to a night vision technology based on digital video photography, and particularly relates to a night vision method based on digital video photography.
Background
In existing night vision methods based on digital video photography, a strong or weak light source appearing in the detection area during night target detection greatly affects the shooting result, and the display definition of images captured under night vision is low.
Disclosure of Invention
The invention aims to provide a night vision method based on digital video photography.
The technical problem to be solved by the invention is as follows:
(1) how to prevent a light source that is too strong or too weak from degrading the night vision shooting result;
(2) how to show night vision camera shooting results more clearly.
The purpose of the invention can be realized by the following technical scheme:
the night vision method based on digital video photography comprises a photosensitive module, a night vision camera shooting end, an image analysis module, an imaging module, a controller, a digital video storage module and a display module;
the night vision camera comprises a second processor, a laser lighting module and a camera unit, wherein the laser lighting module is used for emitting infrared laser to project infrared rays to a camera environment, the camera unit is used for receiving the infrared rays reflected by an object and generating an image, the second processor transmits the image to an image analysis module, the image analysis module is used for analyzing the image, the image analysis module comprises a third processor, and the third processor analyzes the image after receiving the image transmitted by the second processor to obtain an exposure ratio Gn of the image;
the specific analysis steps are as follows:
c1: if G−20% ≤ Gn ≤ G+20%, the camera picture is normally exposed; the third processor generates exposure-normal data and transmits it to the imaging module. The imaging module is used for processing the image and forming a final picture and comprises a fourth processor, an imaging unit and an image data reference database; after receiving the exposure-normal data transmitted by the third processor, the fourth processor processes it to obtain the pixel g of the output image;
the fourth processor generates normal image data according to pixels g of an output image, the fourth processor respectively transmits the normal image data to the imaging unit and the image data reference database, the imaging unit generates a final image after receiving the normal image data transmitted by the fourth processor, the fourth processor transmits the final image to the controller, and the image data reference database stores the normal image data in the image data reference database for storage;
c2: if 0 < Gn < G−20%, the image is underexposed; the third processor generates underexposure data and transmits it to the imaging module, and the fourth processor processes the underexposure data after receiving it. The specific underexposure data processing steps are as follows:
e1: the fourth processor acquires the image data in the underexposure data and the pixel value Pne of the image at each standard brightness level Zn;
e2: performing region division, dividing the region into s × s equal-area standard region segments, numbered in order to obtain all standard region segments Kze, where 1 ≤ ze ≤ s²;
e3: the fourth processor respectively obtains the row pixels Kzie and column pixels Kzje of each standard region segment Kze, where 1 ≤ zie ≤ s² and 1 ≤ zje ≤ s²;
e4: the fourth processor sequentially acquires the pixel quantities Pdz and Pez in each standard region segment Kz at the standard brightness levels n = 0 and n = 256;
e5: the fourth processor reads the imaging data stored in the image data reference database and acquires a pixel value Pn of the image at each standard luminance level Zn, a row pixel Kzi of each standard region segment Kz, a column pixel Kzj of each standard region segment Kz, and a pixel contrast Oz within each standard region segment Kz;
e6: using a formula [formula image] to obtain the deviation degree δ1 within each standard region segment Kze;
e7: using a formula [formula image] to acquire the pixel ge of the output image, where the terms of the formula are given in a further formula image;
The fourth processor generates compensation image data according to the pixel ge of the output image and transmits the compensation image data to the imaging unit, the imaging unit generates a final picture after receiving the compensation image data, and the fourth processor transmits the final picture to the controller;
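The underexposure compensation of steps e1 to e7 can be illustrated with a short sketch. Because the δ1 and ge formulas survive only as images in the original publication, the per-segment mean-difference compensation below is an assumed illustrative form, not the patented formula: δ1 is taken as the mean shortfall of the current segment against the reference imaging data, and each source pixel is raised by δ1 and clamped to the valid range.

```python
# Hedged sketch of steps e5-e7: brighten an underexposed segment by the
# deviation degree delta1 between reference imaging data and the current image.
# The patent's delta1 and ge formulas exist only as images; this per-segment
# mean-difference form is an assumption made purely for illustration.

def compensate_segment(current, reference):
    """current/reference: flat lists of pixel values for one segment Kze."""
    # delta1 (assumed): mean shortfall of the current segment vs the reference
    delta1 = sum(r - c for r, c in zip(reference, current)) / len(current)
    # ge (assumed): source pixels raised by delta1, clamped to [0, 255]
    return [min(255, max(0, round(c + delta1))) for c in current]

dark = [10, 20, 30, 40]
ref = [110, 120, 130, 140]
print(compensate_segment(dark, ref))
```

An overexposure path (step c3) would mirror this with a negative δ2, weakening rather than compensating the segment.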
c3: if G+20% < Gn, the camera picture is overexposed; the third processor generates overexposure data and transmits it to the imaging module, and the fourth processor processes the overexposure data using a formula [formula image] to obtain the pixel gk of the output image;
δ2 is the deviation degree within each standard region segment Kze, Kzik is the row pixel of each standard region segment, Kzjk is the column pixel of each standard region segment, and Oz is the pixel contrast within each standard region segment Kz of the imaging data stored in the image data reference database. The fourth processor generates weakened image data according to the pixel gk of the output image and transmits it to the imaging unit; the imaging unit generates a final picture after receiving the weakened image data, and the fourth processor transmits the final picture to the controller.
Further, the specific steps of the third processor for obtaining the analysis image exposure ratio Gn are as follows:
b1: dividing the brightness of the image into 256 standard brightness levels, the brightness from 0 to 1 being standard brightness level 1, and so on, obtaining all standard brightness levels Zn, where n = 1...256;
b2: the third processor acquires the pixel value Pn of the image at each standard brightness level Zn;
b3: presetting an exposure ratio threshold value G;
b4: using a formula [formula image] to acquire the exposure ratio Gn of the image;
b5: the comparative analysis is performed by comparing the preset exposure threshold value G and the exposure ratio Gn of the image.
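The comparison of steps b1 to b5 (and the banding into c1/c2/c3) can be sketched as follows. The patent's Gn formula is only an image, so `exposure_ratio` below is a stand-in assumption (the fraction of pixels in the upper half of the 256 brightness levels) used purely to illustrate the ±20% banding around the preset threshold G.

```python
# Hedged sketch of steps b1-b5 and c1-c3: classify an image as under-, normally
# or overexposed by comparing its exposure ratio Gn against a threshold G.
# The real Gn formula is unavailable (image only); this stand-in defines Gn as
# the fraction of pixels in the upper half of the 256 standard brightness levels.

def exposure_ratio(pixel_counts):
    """pixel_counts[n] = number of pixels at standard brightness level Z(n+1)."""
    total = sum(pixel_counts)
    bright = sum(pixel_counts[128:])  # upper half of the 256 levels
    return bright / total if total else 0.0

def classify_exposure(gn, g):
    """Band Gn against the preset threshold G plus/minus 20%."""
    if g - 0.20 <= gn <= g + 0.20:
        return "normal"        # c1: exposure-normal data
    if 0 < gn < g - 0.20:
        return "underexposed"  # c2: underexposure data
    return "overexposed"       # c3: overexposure data

# A mostly-dark histogram yields a low Gn, i.e. underexposure at G = 0.5.
counts = [100] * 128 + [5] * 128
print(classify_exposure(exposure_ratio(counts), 0.5))
```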
Further, the specific exposure normal data processing steps are as follows:
d1: the fourth processor acquires the pixel value Pn of the image data and the image at each standard brightness level Zn in the exposure normal data;
d2: performing region division, dividing the region into s × s equal-area standard region segments, numbered in order to obtain all standard region segments Kz, where 1 ≤ z ≤ s²;
d3: the fourth processor respectively obtains the row pixels Kzi and column pixels Kzj of each standard region segment Kz, where 1 ≤ zi ≤ s² and 1 ≤ zj ≤ s²;
d4: the fourth processor sequentially obtains the pixel quantities Pd and Pe in each standard region segment Kz at the standard brightness levels n = 0 and n = 256;
d5: using a formula [formula image] to obtain the pixel contrast Oz within each standard region segment Kz;
d6: using a formula [formula image] to acquire the pixel g of the output image;
f (Kzi, Kzj) is the pixels of the source image within each standard region segment.
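The region division and per-segment contrast of steps d2 to d5 can be sketched as below. The Oz formula itself is only an image in the original, so the (Pe − Pd)/(Pe + Pd) ratio of brightest-level to darkest-level pixel counts is an assumed stand-in; only the s × s equal-area segmentation is taken directly from the text.

```python
# Hedged sketch of steps d2-d5: split an image into s x s equal-area standard
# region segments Kz and compute a per-segment contrast measure Oz.  The
# patent's Oz formula is unavailable (image only); the (Pe - Pd) / (Pe + Pd)
# ratio below is an assumed stand-in built from the counts Pd and Pe.
import numpy as np

def region_segments(img, s):
    """Yield the s*s standard region segments Kz in row-major order."""
    h, w = img.shape
    hs, ws = h // s, w // s  # equal-area segments (dimensions assumed divisible)
    for i in range(s):
        for j in range(s):
            yield img[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]

def segment_contrast(seg, lo=0, hi=255):
    """Assumed Oz: Pd = pixel count at the darkest level, Pe = at the brightest."""
    pd = int(np.count_nonzero(seg == lo))
    pe = int(np.count_nonzero(seg == hi))
    return (pe - pd) / (pe + pd) if (pe + pd) else 0.0

img = np.zeros((8, 8), dtype=np.uint8)
img[4:, 4:] = 255  # make the bottom-right segment fully bright
oz = [segment_contrast(seg) for seg in region_segments(img, 2)]
print(oz)  # three all-dark segments, one all-bright segment
```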
Further, the photosensitive module includes a first processor and a photosensitive sensor, and the photosensitive module is configured to acquire an illumination intensity around a shooting environment, and includes the following specific steps:
the method comprises the following steps: presetting an illumination threshold value as L;
step two: performing region division, dividing the region into 9 standard region sections ordered from left to right and from far to near; the far-left, farthest section is the first standard region section, and so on, obtaining all standard region sections Sn, n = 1...9;
step three: performing time division, dividing the day into 24 standard time periods, 00:00 to 01:00 being one standard time period, and so on, obtaining all standard time periods Ta, a = 1...24;
step four: respectively placing a photosensitive sensor in each standard area section Sn, and collecting the highest illumination intensity La of the current area within the standard time period Ta;
step five: comparing and analyzing the highest illumination intensity La of the current region in the standard time period Ta with an illumination threshold L respectively, wherein the specific analysis steps are as follows:
a1: if La < L, the illumination intensity of the current area in the current standard time period Ta is weak or no visible light exists, and the first processor generates a night vision instruction and transmits the night vision instruction to the night vision camera terminal;
a2: if La is larger than or equal to L, the illumination intensity of the current area in the current standard time period Ta is high, and the first processor does not perform any processing.
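The threshold comparison of steps a1 and a2 can be sketched directly. The section and period structure (9 sections Sn, threshold L) is from the text; the concrete lux-like readings and the threshold value used in the example are assumptions for illustration.

```python
# Hedged sketch of steps a1-a2: compare the highest illumination La of each
# standard region section Sn in a standard time period Ta against the preset
# threshold L, issuing a night-vision instruction where light is insufficient.

def decide(la_by_section, threshold):
    """Return the section ids Sn that should trigger a night-vision instruction."""
    # a1: La < L -> night-vision instruction; a2: La >= L -> no action
    return [sn for sn, la in la_by_section.items() if la < threshold]

# 9 standard region sections; lux-like values and threshold L = 10 are assumed.
readings = {f"S{n}": lux for n, lux in enumerate([3, 12, 8, 50, 1, 9, 14, 30, 2], 1)}
print(decide(readings, 10))
```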
Further, the digital video storage module comprises a fifth processor and a binary file generator and is used for converting picture data into an electrical signal for storage; after receiving the final picture transmitted by the controller, the digital video storage module converts it into an electrical signal for storage according to a set of rules;
the specific transformation rules are as follows:
r1: performing grid division, dividing the final picture into 50,000 equal-area grids;
r2: performing color division, dividing the colors into 162 colors;
r3: the fifth processor respectively acquires the color type of each grid and the position of each grid on the final picture;
the fifth processor records the color type of each grid and the position of each grid on the final picture in a digital document and transmits the digital document to the binary file generator; after receiving the digital document, the binary file generator stores it as a binary file in binary form and deletes the original digital document.
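Rules r1 to r3 can be sketched as a simple encode/decode pair. The grid count and the 162-color palette come from the text, but the patent does not specify a byte layout, so the `struct` record format (a record count followed by row, column and color index per grid) is an assumption for illustration.

```python
# Hedged sketch of rules r1-r3: divide the final picture into equal-area grids,
# quantise each grid to one of a fixed palette of colour indices, and store
# (position, colour) records as a binary file.  The on-disk layout below
# (count header, then <HHB> records) is an assumed format, not the patent's.
import struct

def encode_grids(grid_colors, path):
    """grid_colors: list of (row, col, color_index) tuples, color_index < 162."""
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(grid_colors)))       # record count
        for row, col, color in grid_colors:
            f.write(struct.pack("<HHB", row, col, color))  # position + colour

def decode_grids(path):
    """Read the binary file back into (row, col, color_index) tuples."""
    with open(path, "rb") as f:
        (count,) = struct.unpack("<I", f.read(4))
        return [struct.unpack("<HHB", f.read(5)) for _ in range(count)]

grids = [(0, 0, 17), (0, 1, 161), (1, 0, 0)]
encode_grids(grids, "frame.bin")
print(decode_grids("frame.bin"))
```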
Further, the display module reads and displays the camera shooting result, and the display module comprises a sixth processor and a camera shooting display unit.
The invention has the beneficial effects that:
(1) according to the invention, the brightness of the image is divided into levels, the third processor obtains the pixel value of the image at each standard brightness level, and the fourth processor then reads the imaging data stored in the image data reference database and calculates the output pixel of the image, weakening the brightness of an overexposed image and compensating the brightness of an underexposed image, thereby avoiding unclear image content caused by a light source that is too strong or too weak in night vision shooting;
(2) the digital video storage module divides the final picture into grids of equal area as far as possible, records the position of each grid, and records the color of each grid in as much detail as possible for storage as a binary file, thereby avoiding unclear display of the night vision picture.
Drawings
In order to facilitate understanding for those skilled in the art, the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is a system block diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, a night vision system for digital video photography includes a light sensing module, a night vision camera, an image analysis module, an imaging module, a controller, a digital video storage module, and a display module.
The photosensitive module comprises a first processor and a photosensitive sensor, and is used for acquiring the illumination intensity around a shooting environment, and the specific steps are as follows:
the method comprises the following steps: presetting an illumination threshold value as L;
step two: performing region division, dividing the region into 9 standard region sections ordered from left to right and from far to near, obtaining all standard region sections Sn, n = 1...9;
step three: performing time division, dividing the day into 24 standard time periods, 00:00 to 01:00 being one standard time period, and so on, obtaining all standard time periods Ta, a = 1...24;
step four: placing a photosensitive sensor in each standard region section Sn and collecting the highest illumination intensity La sustained for at least a quarter of the standard time period Ta in the current region;
step five: comparing and analyzing the highest illumination intensity La of the current region in the standard time period Ta with an illumination threshold L respectively, wherein the specific analysis steps are as follows:
a1: if La < L, the illumination intensity of the current area is weak or no visible light exists in the current standard time period Ta, and the first processor generates a night vision instruction and transmits the night vision instruction to the night vision camera terminal;
a2: if La is larger than or equal to L, the illumination intensity of the current area in the current standard time period Ta is high, and the first processor does not perform any processing.
The night vision camera end is used for obtaining images in a night vision environment and comprises a second processor, a laser lighting module and a camera unit; after receiving the night vision instruction transmitted by the first processor, the second processor generates a start instruction and transmits it to the laser lighting module.
The laser lighting module is used for emitting infrared laser to project infrared light onto the shooting environment; after receiving the start instruction from the second processor, it projects infrared light onto the current shooting area. The camera unit is used for receiving the infrared light reflected by objects and generating an image, which the second processor transmits to the image analysis module. The image analysis module comprises a third processor, which analyzes the image after receiving it from the second processor; the specific analysis steps are as follows:
b1: performing brightness division, dividing the brightness of the image into 256 standard brightness levels, the brightness from 0 to 1 being standard brightness level 1, and so on, obtaining all standard brightness levels Zn, n = 1...256;
b2: the third processor acquires a pixel value Pn of an image at each standard brightness level Zn;
b3: presetting an exposure ratio threshold value as G;
b4: using a formula [formula image] to acquire the exposure ratio Gn of the image;
b5: comparing and analyzing the preset exposure threshold G and the exposure ratio Gn of the image, wherein the specific analysis steps are as follows:
c1: if G−20% ≤ Gn ≤ G+20%, the shot picture is normally exposed; the third processor generates exposure-normal data and transmits it to the imaging module. The imaging module is used for processing the image and forming a final picture and comprises a fourth processor, an imaging unit and an image data reference database; the fourth processor processes the exposure-normal data after receiving it from the third processor. The specific exposure-normal data processing steps are as follows:
d1: the fourth processor acquires image data in exposure normal data and a pixel value Pn of an image at each standard brightness level Zn;
d2: performing region division, dividing the region into s × s equal-area standard region segments, numbered in order to obtain all standard region segments Kz, where 1 ≤ z ≤ s²;
d3: the fourth processor respectively obtains the row pixels Kzi and column pixels Kzj of each standard region segment Kz, where 1 ≤ zi ≤ s² and 1 ≤ zj ≤ s²;
d4: the fourth processor sequentially obtains the pixel quantities Pd and Pe in each standard regional section Kz when the standard brightness level is n =0 and n = 256;
d5: using a formula [formula image] to acquire the pixel contrast Oz in each standard region segment Kz;
d6: using a formula [formula image] to acquire the pixel g of the output image;
f (Kzi, Kzj) is the pixels of the source image within each standard region segment. The fourth processor generates normal image data according to the pixels g of the output image and transmits it to both the imaging unit and the image data reference database; the imaging unit generates a final picture from the normal image data, the fourth processor transmits the final picture to the controller, and the image data reference database stores the normal image data.
c2: if 0 < Gn < G−20%, the image is underexposed; the third processor generates underexposure data and transmits it to the imaging module, and the fourth processor processes the underexposure data after receiving it. The specific underexposure data processing steps are as follows:
e1: the fourth processor acquires the image data in the underexposure data and the pixel value Pne of the image at each standard brightness level Zn;
e2: performing region division, dividing the region into s × s equal-area standard region segments, numbered in order to obtain all standard region segments Kze, where 1 ≤ ze ≤ s²;
e3: the fourth processor respectively obtains the row pixels Kzie and column pixels Kzje of each standard region segment Kze, where 1 ≤ zie ≤ s² and 1 ≤ zje ≤ s²;
e4: the fourth processor sequentially acquires the pixel quantities Pdz and Pez in each standard region segment Kz at the standard brightness levels n = 0 and n = 256;
e5: the fourth processor reads the imaging data stored in the image data reference database and acquires a pixel value Pn of the image at each standard luminance level Zn, a row pixel Kzi of each standard region segment Kz, a column pixel Kzj of each standard region segment Kz, and a pixel contrast Oz within each standard region segment Kz;
e6: using a formula [formula image] to obtain the deviation degree δ1 in each standard region segment Kze;
e7: using a formula [formula image] to acquire the pixel ge of the output image, where the terms of the formula are given in a further formula image;
The deviation degree δ1 is the difference between each pixel value Pne of the imaging data at each standard brightness level Zn and the corresponding pixel value Pn of the exposure-normal data at that level. The fourth processor generates compensation image data according to the pixel ge of the output image and transmits it to the imaging unit; the imaging unit generates a final picture after receiving the compensation image data, and the fourth processor transmits the final picture to the controller.
c3: if G+20% < Gn, the captured picture is overexposed; the third processor generates overexposure data and transmits it to the imaging module, and the fourth processor processes the overexposure data after receiving it. The specific overexposure data processing steps are as follows:
k1: the fourth processor acquires the image data in the overexposure data and the pixel value Pnk of the image at each standard brightness level Zn;
k2: performing region division, dividing the region into s × s equal-area standard region segments, numbered in order to obtain all standard region segments Kk, where 1 ≤ k ≤ s²;
k3: the fourth processor respectively obtains the row pixels Kzik and column pixels Kzjk of each standard region segment, where 1 ≤ zik ≤ s² and 1 ≤ zjk ≤ s²;
k4: the fourth processor sequentially acquires the pixel quantities Pdk and Pek in each standard region segment Kk at the standard brightness levels n = 1 and n = 256;
k5: the fourth processor reads the imaging data stored in the image data reference database and acquires a pixel value Pn of the image at each standard luminance level Zn, a row pixel Kzi of each standard region segment Kz, a column pixel Kzj of each standard region segment Kz, and a pixel contrast Oz within each standard region segment Kz;
k6: using a formula [formula image] to obtain the deviation degree δ2 within each standard region segment Kze;
k7: using a formula [formula image] to acquire the pixel gk of the output image, where the terms of the formula are given in a further formula image;
The deviation degree δ2 is the difference between each pixel value Pnk of the imaging data at each standard brightness level Zn and the corresponding pixel value Pn of the exposure-normal data at that level. The fourth processor generates weakened image data according to the pixel gk of the output image and transmits it to the imaging unit; the imaging unit generates a final picture after receiving the weakened image data, and the fourth processor transmits the final picture to the controller.
The controller is used for receiving the final picture transmitted by the fourth processor, the controller generates a storage instruction for the final picture after receiving the final picture transmitted by the fourth processor, and the controller transmits the storage instruction to the digital video storage module.
The digital video storage module comprises a fifth processor and a binary file generator and is used for converting picture data into an electrical signal for storage. After receiving the storage instruction transmitted by the controller, the digital video storage module transmits a picture acquisition instruction back to the controller; on receiving it, the controller transmits the final picture to the digital video storage module, which then converts the final picture into an electrical signal for storage according to a set of rules.
The specific transformation rules are as follows:
r1: performing grid division, dividing the final picture into r equal-area grids;
r2: performing color division, dividing the colors into t colors;
r3: the fifth processor respectively acquires the color type of each grid and the position of each grid on the final picture;
the fifth processor records the color type of each grid and the position of each grid on the final picture in a digital document and transmits it to the binary file generator; after receiving the digital document, the binary file generator stores it as a binary file in binary form and deletes the original digital document.
The display module is used for reading and displaying the camera shooting result and comprises a sixth processor and a camera display unit. The controller transmits a viewing instruction to the display module; after receiving it, the sixth processor generates an acquisition instruction and transmits it to the digital video storage module. After receiving the acquisition instruction, the fifth processor transmits the binary file to the display module; the sixth processor reads the binary file, acquires the image information, and transmits it to the camera display unit, which restores the image information into a complete picture according to the position and color information of each grid.
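The display-module step of restoring a complete picture from per-grid records can be sketched as follows. The 162-entry color palette is only named in the text, so the two-entry stand-in palette and the simple row/column grid layout below are assumptions for illustration.

```python
# Hedged sketch of the display-module step: rebuild a complete picture from
# per-grid (position, colour-index) records read out of the stored binary file.
# The palette mapping colour indices to RGB triples is a stand-in assumption
# for the patent's 162-colour division.

def restore_picture(records, rows, cols, palette):
    """records: iterable of (row, col, color_index); returns a rows x cols RGB grid."""
    picture = [[(0, 0, 0)] * cols for _ in range(rows)]  # default: black
    for row, col, color in records:
        picture[row][col] = palette[color]
    return picture

palette = {0: (0, 0, 0), 1: (255, 255, 255)}  # stand-in for the 162 colours
records = [(0, 1, 1), (1, 0, 1)]
pic = restore_picture(records, 2, 2, palette)
print(pic)
```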
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is illustrative and explanatory only and is not intended to be exhaustive or to limit the invention to the precise embodiments described, and various modifications, additions, and substitutions may be made by those skilled in the art without departing from the scope of the invention or exceeding the scope of the claims.

Claims (6)

1. The night vision method based on digital video photography is characterized by comprising a photosensitive module, a night vision camera shooting end, an image analysis module, an imaging module, a controller, a digital video storage module and a display module;
the night vision camera shooting end comprises a second processor, a laser lighting module and a camera shooting unit, wherein the laser lighting module is used for emitting infrared laser to carry out infrared ray projection on a camera shooting environment, the camera shooting unit is used for receiving infrared rays reflected by an object and generating images, the second processor transmits the images to an image analysis module, the image analysis module is used for analyzing the images, the image analysis module comprises a third processor, and the third processor analyzes the images after receiving the images transmitted by the second processor to obtain an exposure ratio Gn of the images;
the specific analysis steps are as follows:
c1: if G − 20% ≤ Gn ≤ G + 20%, the camera picture is normally exposed; the third processor generates exposure normal data and transmits the exposure normal data to the imaging module, which is used for processing the image and forming a final picture; the imaging module comprises a fourth processor, an imaging unit and an image data reference database, and the fourth processor processes the exposure normal data after receiving it from the third processor to obtain the pixel g of the output image;
the fourth processor generates normal image data according to the pixel g of the output image and transmits the normal image data to the imaging unit and to the image data reference database respectively; the imaging unit generates a final picture after receiving the normal image data transmitted by the fourth processor, the fourth processor transmits the final picture to the controller, and the image data reference database stores the normal image data;
c2: if 0 < Gn < G − 20%, the camera picture is underexposed; the third processor generates underexposure data and transmits the underexposure data to the imaging module, and the fourth processor processes the underexposure data after receiving it from the third processor; the specific underexposure data processing steps are as follows:
e1: the fourth processor acquires the image data in the underexposure data and the pixel value Pne of the image at each standard brightness level Zn;
e2: performing region division, dividing the area into s × s standard region segments of equal area, and so on in sequence to obtain all standard region segments Kze, where 1 ≤ ze ≤ s^2;
e3: the fourth processor respectively acquires the row pixel Kzie and the column pixel Kzje of each standard region segment Kze, where 1 ≤ zie ≤ s^2 and 1 ≤ zje ≤ s^2;
e4: the fourth processor sequentially acquires the pixel quantities Pdz and Pez in each standard region segment Kze at the standard brightness levels n = 0 and n = 256;
e5: the fourth processor reads the imaging data stored in the image data reference database and acquires a pixel value Pn of the image at each standard luminance level Zn, a row pixel Kzi of each standard region segment Kz, a column pixel Kzj of each standard region segment Kz, and a pixel contrast Oz within each standard region segment Kz;
e6: using a formula (published only as an embedded image in the original document), acquiring the bias degree δ1 within each standard region segment Kze;
e7: using a formula (published only as an embedded image in the original document), acquiring the pixel ge of the output image; the definitions of the terms in the formula are likewise published only as an embedded image;
The fourth processor generates compensation image data according to the pixel ge of the output image and transmits the compensation image data to the imaging unit, the imaging unit generates a final picture after receiving the compensation image data, and the fourth processor transmits the final picture to the controller;
c3: if G + 20% < Gn, the camera picture is overexposed; the third processor generates overexposure data and transmits the overexposure data to the imaging module, and the fourth processor processes the overexposure data using a formula (published only as an embedded image in the original document) to obtain the pixel gk of the output image;
δ2 is the deviation degree within each standard region segment Kze, Kzik is the row pixel of each standard region segment, Kzjk is the column pixel of each standard region segment, and Oz is the pixel contrast within each standard region segment Kz in the imaging data stored in the image data reference database; the fourth processor generates weakened image data according to the pixel gk of the output image and transmits the weakened image data to the imaging unit, the imaging unit generates a final picture after receiving the weakened image data, and the fourth processor transmits the final picture to the controller.
2. The night vision method for digital video photography according to claim 1, wherein the third processor obtains the image exposure ratio Gn by the following steps:
b1: dividing the brightness of the image into 256 standard brightness levels, brightness values from 0 to 1 forming one standard brightness level, and so on in sequence to obtain all standard brightness levels Zn, where n = 1...256;
b2: the third processor acquires a pixel value Pn of an image at each standard brightness level Zn;
b3: presetting an exposure ratio threshold value as G;
b4: using a formula (published only as an embedded image in the original document), acquiring the exposure ratio Gn of the image;
b5: the comparative analysis is performed by comparing the preset exposure ratio threshold G with the exposure ratio Gn of the image.
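The exposure check in steps b1–b5 and the branch selection in c1–c3 of claim 1 can be sketched as follows. The patent's actual formula for Gn is published only as an embedded image, so the histogram-based ratio below, the choice of mid-brightness as the cutoff, and the reading of "±20%" as an absolute margin around G are all assumptions:

```python
# Sketch of the exposure analysis (b1-b5) and branch selection (c1-c3).
# Assumptions: Gn is approximated as the share of pixels in the upper half
# of the 256 standard brightness levels, and "G +/- 20%" is read as an
# absolute margin of 0.20 around the preset threshold G.

def exposure_ratio(pixels, levels=256):
    """b1-b2: histogram brightness into `levels` standard levels Zn,
    then return an assumed exposure ratio Gn in [0, 1]."""
    hist = [0] * levels                      # Pn per standard level
    for p in pixels:
        hist[min(max(p, 0), levels - 1)] += 1
    bright = sum(hist[levels // 2:])         # assumed bright-pixel share
    return bright / max(len(pixels), 1)

def classify_exposure(gn, g):
    """b5 / c1-c3: compare Gn against G with a +/-20% band."""
    if g - 0.20 <= gn <= g + 0.20:
        return "normal"                      # c1: normal exposure
    if 0 < gn < g - 0.20:
        return "under"                       # c2: underexposed
    return "over"                            # c3: overexposed

gn = exposure_ratio([10, 10, 200, 220, 240, 30])
print(classify_exposure(gn, 0.5))  # → normal
```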
3. The night vision method for digital video photography according to claim 1, wherein the specific exposure normal data processing steps are as follows:
d1: the fourth processor acquires image data in exposure normal data and a pixel value Pn of an image at each standard brightness level Zn;
d2: performing region division, dividing the area into s × s standard region segments of equal area, and so on in sequence to obtain all standard region segments Kz, where 1 ≤ z ≤ s^2;
d3: the fourth processor respectively acquires the row pixel Kzi and the column pixel Kzj of each standard region segment Kz, where 1 ≤ zi ≤ s^2 and 1 ≤ zj ≤ s^2;
d4: the fourth processor sequentially acquires the pixel quantities Pd and Pe in each standard region segment Kz at the standard brightness levels n = 0 and n = 256;
d5: using a formula (published only as an embedded image in the original document), acquiring the pixel contrast Oz within each standard region segment Kz;
d6: using a formula (published only as an embedded image in the original document), acquiring the pixel g of the output image;
f (Kzi, Kzj) is the pixels of the source image within each standard region segment.
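Steps d2–d4 (and the parallel e2–e4 in claim 1) can be sketched as follows; the sample image, the use of 255 as the top brightness level (the patent writes n = 256), and the divisibility of the image dimensions by s are illustrative assumptions:

```python
# Sketch of steps d2-d4: divide a grayscale image into s x s equal standard
# region segments Kz and count per segment the pixels at the darkest and
# brightest levels (Pd at level 0, Pe at the top level).

def split_segments(img, s):
    """img: list of rows of brightness values; returns the s*s segments Kz
    in row-major order (image dimensions assumed divisible by s)."""
    h, w = len(img), len(img[0])
    bh, bw = h // s, w // s
    return [[row[c * bw:(c + 1) * bw] for row in img[r * bh:(r + 1) * bh]]
            for r in range(s) for c in range(s)]

def extreme_counts(seg, top=255):
    """d4: pixel quantities Pd (darkest) and Pe (brightest) in one segment."""
    flat = [p for row in seg for p in row]
    return sum(p == 0 for p in flat), sum(p == top for p in flat)

img = [[0, 255, 10, 10],
       [0, 0, 10, 255],
       [5, 5, 255, 255],
       [5, 5, 255, 0]]
print([extreme_counts(seg) for seg in split_segments(img, 2)])
# → [(3, 1), (0, 1), (0, 0), (1, 3)]
```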
4. The night vision method based on digital video photography as claimed in claim 1, wherein the photosensitive module comprises a first processor and a photosensitive sensor, the photosensitive module is used for acquiring the illumination intensity around the photographic environment, and the specific steps are as follows:
the method comprises the following steps: presetting an illumination threshold value as L;
step two: performing region division, dividing the area into 9 standard region sections ordered from left to right and from far to near, the leftmost and farthest section being the first, and so on in sequence to obtain all standard region sections Sn, where n = 1...9;
step three: performing time division, dividing the day into 24 standard time periods, 00:00 to 01:00 being one standard time period, and so on in sequence to obtain all standard time periods Ta, where a = 1...24;
step four: respectively placing a photosensitive sensor in each standard area section Sn, and collecting the highest illumination intensity La of the current area within the standard time period Ta;
step five: comparing and analyzing the highest illumination intensity La of the current region in the standard time period Ta with an illumination threshold L respectively, wherein the specific analysis steps are as follows:
a1: if La < L, the illumination intensity of the current area in the current standard time period Ta is weak or no visible light exists, and the first processor generates a night vision instruction and transmits the night vision instruction to the night vision camera terminal;
a2: if La is larger than or equal to L, the illumination intensity of the current area in the current standard time period Ta is high, and the first processor does not perform any processing.
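The decision in steps a1–a2 can be sketched as follows; the threshold value, the lux-style unit, and the sample hourly peak readings are illustrative assumptions, as the patent leaves L unspecified:

```python
# Sketch of the decision in a1-a2: compare the highest illuminance La
# measured in a standard region section during a standard time period Ta
# with the preset threshold L. Threshold and readings are assumptions.

def needs_night_vision(la, l):
    """a1: La < L -> weak or no visible light, issue night vision
    instruction; a2: La >= L -> no action."""
    return la < l

L_THRESHOLD = 10.0   # assumed threshold (the patent leaves L unspecified)
# assumed peak readings La for one region section, keyed by hour Ta
readings = {0: 0.1, 6: 2.5, 12: 800.0, 18: 45.0, 23: 0.2}
night_hours = sorted(t for t, la in readings.items()
                     if needs_night_vision(la, L_THRESHOLD))
print(night_hours)  # → [0, 6, 23]
```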
5. The night vision method for digital-video-based photography according to claim 1, wherein the digital video storage module comprises a fifth processor and a binary file generator, the digital video storage module is used for converting the picture data into electric signals for storage, the controller transmits the final picture to the digital video storage module, and the digital video storage module receives the final picture transmitted by the controller and converts the final picture into electric signals for storage according to a certain rule;
the specific transformation rules are as follows:
r1: carrying out grid division, and dividing the final picture into 50000 grids in equal area;
r2: color division is carried out, and the colors are divided into 162 colors;
r3: the fifth processor respectively acquires the color type of each grid and the position of each grid on the final picture;
the fifth processor records the color type of each grid and the position of each grid on the final picture in a digital document and transmits the digital document to the binary file generator, the binary file generator receives the digital document transmitted by the fifth processor, and after receiving the digital document transmitted by the fifth processor, the binary file generates the digital document into a binary file in a binary form for storage and deletes the original digital document.
6. The night vision method based on digital video photography of claim 1, wherein the display module is used to read and display the result of the camera shooting, and the display module comprises a sixth processor and a camera shooting display unit.
CN202210846085.XA 2022-07-19 2022-07-19 Night vision method based on digital video photography Pending CN115002356A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210846085.XA CN115002356A (en) 2022-07-19 2022-07-19 Night vision method based on digital video photography


Publications (1)

Publication Number Publication Date
CN115002356A true CN115002356A (en) 2022-09-02

Family

ID=83021177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210846085.XA Pending CN115002356A (en) 2022-07-19 2022-07-19 Night vision method based on digital video photography

Country Status (1)

Country Link
CN (1) CN115002356A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019183813A1 (en) * 2018-03-27 2019-10-03 华为技术有限公司 Image capture method and device
CN111418201A (en) * 2018-03-27 2020-07-14 华为技术有限公司 Shooting method and equipment
US20200195827A1 (en) * 2018-12-12 2020-06-18 Vivotek Inc. Metering compensation method and related monitoring camera apparatus
WO2021051222A1 (en) * 2019-09-16 2021-03-25 北京数字精准医疗科技有限公司 Endoscope system, mixed light source, video acquisition device and image processor
WO2022001648A1 (en) * 2020-06-30 2022-01-06 维沃移动通信有限公司 Image processing method and apparatus, and device and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220902