CN117714902B - Ultralow-illumination image video online monitoring method and system

Info

Publication number
CN117714902B
Authority
CN
China
Prior art keywords
image
video
illumination
ultralow
video frame
Prior art date
Legal status
Active
Application number
CN202410159964.4A
Other languages
Chinese (zh)
Other versions
CN117714902A (en)
Inventor
王非
Current Assignee
Guangdong Senxu General Equipment Technology Co ltd
Original Assignee
Guangdong Senxu General Equipment Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Senxu General Equipment Technology Co ltd
Priority to CN202410159964.4A
Publication of CN117714902A
Application granted
Publication of CN117714902B

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an ultra-low-illumination image video online monitoring method and system. The method includes: obtaining an ultra-low-illumination video and extracting a plurality of video frame images, marking the video frame images in frame-by-frame order; establishing a processing template, dividing the processing template into areas and attaching a label to each area, wherein each label corresponds to one mark; placing the video frame images in turn into the corresponding areas according to the correspondence between the labels and the marks to obtain a spliced processing template, which is taken as the original image; constructing and training an image enhancement model, inputting the original image into the image enhancement model, and outputting an enhanced image after the model performs the image enhancement operation on the original image; and taking the enhanced image as the processing template, dividing the corresponding areas according to the labels, and cropping each area to obtain a plurality of enhanced video frame images, which are then synthesized into an enhanced online monitoring video. The method and system improve the enhancement processing efficiency of ultra-low-illumination video while guaranteeing the enhancement effect.

Description

Ultralow-illumination image video online monitoring method and system
Technical Field
The invention belongs to the technical field of image enhancement, and particularly relates to an ultralow-illuminance image video on-line monitoring method and system.
Background
A large number of image capturing apparatuses produce relatively dark images under conditions such as insufficient illumination or occlusion; in images captured at night in particular, the content to be monitored is essentially invisible, so video enhancement in low-illumination environments is highly necessary.
Existing low-illumination image video enhancement generally falls into two types. One is based on image transformation and enhances the image using its frequency-domain information. The other is based on Retinex theory, such as the single-scale Retinex (SSR) algorithm, which enhances detail and contrast by estimating the global illumination distribution and reflection distribution of the image; the multi-scale Retinex (MSR) algorithm, which introduces Gaussian filters at several scales on the basis of SSR; and the MSRCR (Multi-Scale Retinex with Color Restoration) algorithm, which adds a color restoration module on the basis of MSR. These algorithms process video frame images and contribute greatly to the enhancement effect, but when conversion efficiency is pursued rather than a high enhancement effect, conversion can still be too slow. How to improve conversion efficiency while guaranteeing the enhancement effect is therefore a key problem.
Disclosure of Invention
Aiming at the problems of the background technology, the invention provides an ultra-low illumination image video on-line monitoring method.
To achieve the purpose, the invention adopts the following technical scheme:
an ultralow-illuminance image video on-line monitoring method, step A: acquiring an ultralow-illumination video allowing online monitoring within a delay time, extracting a plurality of video frame images from the ultralow-illumination video, and marking the plurality of video frame images according to a frame-by-frame sequence;
Step B: establishing a processing template, dividing the processing template into areas, and attaching a label to each divided area, wherein each label corresponds to one mark;
sequentially placing the plurality of video frame images into the corresponding areas according to the correspondence between the labels and the marks, so as to splice the plurality of video frame images onto the processing template, and taking the spliced processing template as an original image;
Step C: constructing and training an image enhancement model, inputting an original image into the trained image enhancement model, and outputting an enhanced image after the trained image enhancement model performs image enhancement operation on the original image;
Step D: taking the enhanced image as a processing template, dividing corresponding areas according to labels, and cutting each area to obtain a plurality of enhanced video frame images;
step E: and sequentially placing the enhanced video frame images into a time axis according to the marks so as to synthesize the enhanced online monitoring video.
Preferably, between step D and step E, the method further comprises performing a judging operation and a re-enhancement operation:
the judging operation comprises judging whether an enhanced video frame image is lower than a preset illuminance; if so, the re-enhancement operation is executed, otherwise step E is executed;
the re-enhancement operation includes:
Step D1: selecting the video frame images lower than the preset illuminance and enlarging them, while retaining the video frame images higher than the preset illuminance;
Step D2: merging the areas into which the processing template was divided in step B, merging the labels corresponding to the merged areas, and re-associating the merged labels with the marks of the selected video frame images lower than the preset illuminance;
Step D3: sequentially placing the enlarged video frame images into the merged areas according to the correspondence between the merged labels and the marks of the selected video frame images lower than the preset illuminance to obtain a newly spliced processing template, inputting the newly spliced processing template into the image enhancement model as a new original image for secondary enhancement, and outputting a secondary-enhanced image;
Step D4: executing the operation of step D on the secondary-enhanced image to obtain secondary-enhanced video frame images, and shrinking the secondary-enhanced video frame images to restore their original size;
Step D5: re-executing step E on the secondary-enhanced video frame images together with the video frame images whose illuminance is higher than the preset illuminance.
Preferably, in the step C, training the image enhancement model includes:
Step C1: constructing a training sample, wherein the training sample comprises an ultralow-illumination initial image;
step C2: preprocessing a training sample, wherein the preprocessing comprises the steps of denoising and contrast enhancement operation on an ultralow-illumination initial image;
Step C3: according to the illumination condition of the ultra-low illumination environment, performing illumination compensation on the initial image;
Step C4: processing the initial image after illumination compensation by using an enhancement algorithm;
Step C5: performing noise suppression on the enhanced image after the step C4 is executed;
step C6: and optimizing the enhanced image after noise suppression, wherein the optimization comprises color correction and edge enhancement.
Preferably, step C3 includes:
Step C31: acquiring ambient illumination information of the ultra-low-illumination environment;
Step C32: extracting the first n highest-brightness pixels in the initial image, discarding outlier values among them, averaging the remaining values to obtain an estimated highest-brightness pixel value, and taking the estimated highest-brightness pixel value as the estimated atmospheric light value;
step C33: acquiring the transmissivity of each pixel in the initial image according to the estimated atmospheric light value;
Step C34: performing illumination compensation on the initial image through formula one;
Formula one: I_compensated = (I - A) / max(t, t0) + A;
wherein: I_compensated represents the illumination-compensated image;
I represents the input initial image;
A represents the estimated atmospheric light value;
t represents transmittance;
t0 represents a small positive number and may take the value 1.
Preferably, step C4 includes:
Step C41: calculating the cumulative distribution functions (CDFs) of the illumination-compensated image and the target image;
Step C42: mapping the CDF of the illumination compensated image to the CDF of the target image to obtain a mapping function;
Step C43: each pixel value of the illumination-compensated image is mapped to a pixel value of the target image using a mapping function.
Preferably, step C41 includes:
step C411: converting the illumination compensated image and the target image into a gray scale image;
Step C412: acquiring histograms of the image after illumination compensation and the target image, namely counting the number of pixels of each gray level in the two images;
Step C413: obtaining normalized histograms of the illumination-compensated image and the target image, namely dividing the number of pixels of each gray level in the two images by the total number of pixels to obtain the pixel probability of each gray level in the two images;
step C414: and acquiring a cumulative distribution function CDF of the illumination compensated image and the target image, namely accumulating the pixel probability of each gray level in the two images.
Preferably, step C5 includes:
Step C51: acquiring the value of each pixel point in the enhanced image after step C4 is executed;
Step C52: sorting the pixel values in the neighborhood surrounding a pixel point, and selecting the value in the middle position as the new value of the current pixel point;
step C53: step C52 is sequentially performed for each pixel of the enhanced image after step C4.
Preferably, an ultralow-illumination image video online monitoring system comprises a video monitoring device, a video conversion module and a video output device, wherein the video conversion module applies the above ultralow-illumination image video online monitoring method;
the video monitoring device is used for shooting ultralow-illumination video;
the video conversion module is used for converting the shot ultralow-illuminance video into an enhanced online monitoring video;
the video output device is used for outputting the enhanced online monitoring video.
Compared with the prior art, one of the technical schemes has the following beneficial effects:
1. The invention uses the delay duration within the allowable range as the node for segmentation, solving the problem of frame stuttering in video synthesis and splicing;
2. The invention performs enhancement processing by combining a plurality of video frame images into one original image, thereby improving processing efficiency;
3. The invention realizes secondary enhancement through the re-enhancement operation, ensuring a certain enhancement effect while improving processing efficiency.
Drawings
FIG. 1 is a flow chart of the ultra-low illumination image video on-line monitoring of the present invention;
FIG. 2 is a flow chart of the re-enhancement operation of the present invention;
Fig. 3 is a flow chart of the training image enhancement model of the present invention.
Detailed Description
The technical scheme of the invention is further described below by the specific embodiments with reference to the accompanying drawings.
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms first, second and the like in the description, in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, or article that comprises a list of steps or elements is not limited to those listed, but may optionally include other steps or elements not listed or inherent to such a process, method, apparatus, or article.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The present application provides an ultra-low-illumination image video online monitoring method, as shown in fig. 1. Step A: acquiring a segment of ultralow-illumination video whose duration lies within the delay time allowed for online monitoring, extracting a plurality of video frame images from the ultralow-illumination video, and marking the plurality of video frame images in frame-by-frame order;
In the present application, in order to convert the ultralow-illuminance video in real time without causing frame stuttering, the delay duration within the allowable range is used as the node for segmentation. For example, if the delay duration is 3 seconds, the enhancement operation is performed on a 3-second segment of ultralow-illuminance video; after that segment has been processed, the next 3-second segment is extracted and processed. Using the delay duration as the node in this way solves the frame stuttering problem in video synthesis and splicing.
For a segment of ultra-low-illumination video, a plurality of video frame images are extracted and marked in frame-by-frame order; for example, mark 1 indicates the first frame image.
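A minimal sketch of step A is given below, assuming OpenCV is used for decoding; the 3-second delay window, the function name and the use of sequential integers as marks are illustrative choices rather than details fixed by the application.

```python
import cv2

def extract_marked_frames(video_path, window_seconds=3.0):
    """Read one delay-window segment of ultralow-illumination video and
    return its frames marked in frame-by-frame order (mark 1 = first frame)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0       # fall back if FPS is unreported
    frames_per_window = int(round(fps * window_seconds))
    marked_frames = []                            # list of (mark, frame) pairs
    for mark in range(1, frames_per_window + 1):
        ok, frame = cap.read()
        if not ok:
            break
        marked_frames.append((mark, frame))
    cap.release()
    return marked_frames
```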
Step B: establishing a processing template, dividing the processing template into areas, and attaching a label to each divided area, wherein each label corresponds to one mark;
sequentially placing the plurality of video frame images into the corresponding areas according to the correspondence between the labels and the marks, so as to splice the plurality of video frame images onto the processing template, and taking the spliced processing template as an original image;
In the present application, a processing template is established. The size of the template must accommodate a plurality of video frame images so that they can be spliced onto it; therefore, the number of video frame images to be processed each time is determined in advance, and the size of the processing template is determined accordingly. The processing template is divided into areas, each area is given a specific label, each frame image is placed into one area, and the label of the area and the mark of the frame image are mapped to each other. Taking this processing template as the original image, a plurality of video frame images are processed synchronously, which improves processing efficiency.
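The following sketch illustrates one way the splicing of step B could work, assuming the processing template is a simple rows-by-cols grid of equally sized cells and that integer labels are assigned row by row; the grid layout, the function name and the returned mappings are assumptions for illustration only.

```python
import numpy as np

def build_processing_template(marked_frames, rows, cols):
    """Splice marked frames onto one template image; return the template,
    a label -> mark correspondence and each label's cell rectangle."""
    frame_h, frame_w = marked_frames[0][1].shape[:2]
    template = np.zeros((rows * frame_h, cols * frame_w, 3), dtype=np.uint8)
    label_to_mark, label_to_rect = {}, {}
    for idx, (mark, frame) in enumerate(marked_frames):
        r, c = divmod(idx, cols)                  # fill the grid row by row
        y0, x0 = r * frame_h, c * frame_w
        template[y0:y0 + frame_h, x0:x0 + frame_w] = frame
        label = idx + 1                           # label attached to this area
        label_to_mark[label] = mark               # label <-> mark correspondence
        label_to_rect[label] = (y0, x0, frame_h, frame_w)
    return template, label_to_mark, label_to_rect
```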
Step C: constructing and training an image enhancement model, inputting an original image into the trained image enhancement model, and outputting an enhanced image after the trained image enhancement model performs image enhancement operation on the original image;
Step D: taking the enhanced image as a processing template, dividing corresponding areas according to labels, and cutting each area to obtain a plurality of enhanced video frame images;
After the enhanced image corresponding to the processing template is obtained, the enhanced image needs to be cropped to recover the enhanced video frame images, so that the video can be conveniently synthesized.
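A short sketch of the cropping in step D, reusing the hypothetical label_to_mark and label_to_rect mappings from the splicing sketch above; how the enhanced frames are indexed is an implementation choice, not part of the application.

```python
def split_enhanced_template(enhanced_template, label_to_mark, label_to_rect):
    """Cut the enhanced template back into per-frame images keyed by mark."""
    enhanced_frames = {}
    for label, (y0, x0, h, w) in label_to_rect.items():
        mark = label_to_mark[label]
        enhanced_frames[mark] = enhanced_template[y0:y0 + h, x0:x0 + w].copy()
    return enhanced_frames                        # mark -> enhanced frame image
```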
Step E: and sequentially placing the enhanced video frame images into a time axis according to the marks so as to synthesize the enhanced online monitoring video.
The enhanced video frame images are placed on the time axis in sequence according to their marks, so that confusion is avoided and the fidelity of the restored video is not affected.
Preferably, between step D and step E, the method further comprises performing a judging operation and a re-enhancement operation:
the judging operation comprises judging whether an enhanced video frame image is lower than the preset illuminance; if so, the re-enhancement operation is executed, otherwise step E is executed;
In this technical solution, although a high enhancement effect is not pursued, the stability of the enhancement effect still needs to be ensured. Because a plurality of video frame images are combined together and enhanced synchronously, some of the resulting video frame images may still have low visibility in parts of the image, and the illuminance of their content may remain low; this typically happens when too many images are combined. It is therefore necessary to judge whether each enhanced frame is below the preset illuminance. The preset illuminance should be understood as a criterion for judging whether the image content can be seen clearly; if a frame is below it, the re-enhancement operation needs to be executed.
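One plausible reading of the judging operation is sketched below: treat the mean gray level of each enhanced frame as its illuminance and compare it with a preset threshold. The threshold value and the use of the mean gray level as the illuminance measure are assumptions, since the application does not specify how the preset illuminance is evaluated.

```python
import cv2
import numpy as np

def frames_below_preset(enhanced_frames, preset_illuminance=40.0):
    """Return the marks of enhanced frames whose mean gray level is still
    below the preset illuminance and therefore need re-enhancement."""
    too_dark = []
    for mark, frame in enhanced_frames.items():
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if float(np.mean(gray)) < preset_illuminance:
            too_dark.append(mark)
    return too_dark
```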
As shown in fig. 2, the re-enhancement operation includes:
step D1: selecting a video frame image lower than the preset illuminance, amplifying the video frame image, and reserving the video frame image higher than the preset illuminance;
Step D2: b, combining the areas divided by the processing templates in the step B, combining the labels corresponding to the combined areas, and re-corresponding the labels after combination to the labels of the selected video frame images lower than the preset illumination;
Step D3: sequentially placing the amplified video frame images into the combined area according to the corresponding relation between the combined label and the selected label of the video frame images lower than the preset illumination to obtain a new spliced processing template, inputting the new spliced processing template into an image enhancement model as a new original image for secondary enhancement, and outputting a secondary enhancement image;
Step D4: d, executing the operation of the step D on the secondary enhanced image to obtain a secondary enhanced video frame image, and reducing the secondary enhanced video frame image to restore the original size;
Step D5: and E, re-executing the step E on the video frame image after the secondary enhancement and the video frame image with the illuminance higher than the preset illuminance.
The essence of the re-enhancement operation is the same as that of steps A to D. However, because the number of video frame images in the re-enhancement operation is reduced while the processing template remains unchanged, the selected video frame images need to be enlarged, and the processing template correspondingly needs to merge a matching number of areas to accommodate the enlarged video frame images;
Further, the marks of the video frame images cannot be changed, because the video is later synthesized according to the marks. Since the areas of the processing template have changed, the labels need to be re-established; this is most easily achieved by merging the labels of all the merged areas and, after merging, remapping them to the marks of the video frame images placed in those areas.
The new processing template is then put into the image enhancement model for re-enhancement, secondary-enhanced video frame images are output, and the secondary-enhanced video frame images are shrunk so that their size is consistent with the other, non-selected video frame images whose illuminance is higher than the preset illuminance; finally, all video frame images are synthesized into the enhanced video.
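As a sketch of the re-enhancement tiling, assume each merged area covers a 2x2 block of original cells, so each selected frame is enlarged by a factor of 2 before being placed; the merge factor, grid layout and names follow the earlier sketches and are not fixed by the application.

```python
import cv2
import numpy as np

def build_reenhancement_template(frames_by_mark, dark_marks, cell_h, cell_w, cols):
    scale = 2                                      # assumed 2x2 merge of areas
    merged_cols = max(cols // scale, 1)
    merged_rows = -(-len(dark_marks) // merged_cols)   # ceiling division
    template = np.zeros((merged_rows * cell_h * scale,
                         merged_cols * cell_w * scale, 3), dtype=np.uint8)
    merged_label_to_mark, merged_label_to_rect = {}, {}
    for idx, mark in enumerate(dark_marks):
        enlarged = cv2.resize(frames_by_mark[mark], (cell_w * scale, cell_h * scale))
        r, c = divmod(idx, merged_cols)
        y0, x0 = r * cell_h * scale, c * cell_w * scale
        template[y0:y0 + cell_h * scale, x0:x0 + cell_w * scale] = enlarged
        merged_label_to_mark[idx + 1] = mark       # merged label -> original mark
        merged_label_to_rect[idx + 1] = (y0, x0, cell_h * scale, cell_w * scale)
    return template, merged_label_to_mark, merged_label_to_rect
```

After the secondary enhancement, each cropped frame would be shrunk back to its original cell size (for example with cv2.resize to (cell_w, cell_h)) before the video is synthesized.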
Preferably, as shown in fig. 3, in the step C, training the image enhancement model includes:
Step C1: constructing a training sample, wherein the training sample comprises an ultralow-illumination initial image;
step C2: preprocessing a training sample, wherein the preprocessing comprises the steps of denoising and contrast enhancement operation on an ultralow-illumination initial image;
Step C3: according to the illumination condition of the ultra-low illumination environment, performing illumination compensation on the initial image;
Step C4: processing the initial image after illumination compensation by using an enhancement algorithm;
Step C5: performing noise suppression on the enhanced image after the step C4 is executed;
step C6: and optimizing the enhanced image after noise suppression, wherein the optimization comprises color correction and edge enhancement.
Preferably, step C3 includes:
Step C31: acquiring ambient illumination information of the ultra-low-illumination environment;
Step C32: extracting the first n highest-brightness pixels in the initial image, discarding outlier values among them, averaging the remaining values to obtain an estimated highest-brightness pixel value, and taking the estimated highest-brightness pixel value as the estimated atmospheric light value;
step C33: acquiring the transmissivity of each pixel in the initial image according to the estimated atmospheric light value;
Transmittance, which represents the degree of attenuation of light as it passes through the atmosphere, can be used to compensate for illumination changes in an image.
Step C34: performing illumination compensation on the initial image through formula one;
Formula one: I_compensated = (I - A) / max(t, t0) + A;
wherein: I_compensated represents the illumination-compensated image;
I represents the input initial image;
A represents the estimated atmospheric light value;
t represents transmittance;
t0 represents a small positive number used to avoid division by zero, and may take the value 1.
After illumination compensation, the target object in the image can be more clearly visible.
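A minimal sketch of steps C31-C34 under stated assumptions: the atmospheric light value A is estimated by averaging the top-n brightest gray values with the extremes trimmed, and the transmittance is taken as a dark-channel-style estimate t = 1 - omega * (I / A). The application only states that the transmittance is derived from the estimated atmospheric light value, so this estimator and the constants n, omega and t0 are illustrative.

```python
import numpy as np

def illumination_compensation(initial, n=100, omega=0.95, t0=0.1):
    """Apply formula one, I_compensated = (I - A) / max(t, t0) + A, per pixel."""
    img = initial.astype(np.float64)
    gray = img.mean(axis=2)
    # Step C32: average the top-n brightest gray values, trimming the two
    # extremes as a crude outlier removal, to estimate the atmospheric light A.
    top = np.sort(gray.ravel())[-n:]
    a = float(np.mean(top[1:-1])) if n > 2 else float(np.mean(top))
    # Step C33: per-pixel transmittance derived from the estimated A (assumed form).
    t = 1.0 - omega * np.clip(gray / max(a, 1e-6), 0.0, 1.0)
    t = np.maximum(t, t0)[..., None]               # max(t, t0) avoids division by zero
    # Step C34: formula one.
    compensated = (img - a) / t + a
    return np.clip(compensated, 0, 255).astype(np.uint8)
```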
Preferably, step C4 includes:
Step C41: calculating the cumulative distribution functions (CDFs) of the illumination-compensated image and the target image;
Step C42: mapping the CDF of the illumination compensated image to the CDF of the target image to obtain a mapping function;
Step C43: each pixel value of the illumination-compensated image is mapped to a pixel value of the target image using a mapping function.
In this embodiment, the image output by step C43 is taken as the enhanced image produced by step C4 and passed into the operation of step C5.
Specifically, CDFs of the illumination-compensated image and the target image may be used as coordinates of an x-axis and a y-axis, and an interpolation method (e.g., linear interpolation) may be used to obtain a mapping function.
Preferably, step C41 includes:
step C411: converting the illumination compensated image and the target image into a gray scale image;
Step C412: acquiring histograms of the image after illumination compensation and the target image, namely counting the number of pixels of each gray level in the two images;
Step C413: obtaining normalized histograms of the illumination-compensated image and the target image, namely dividing the number of pixels of each gray level in the two images by the total number of pixels to obtain the pixel probability of each gray level in the two images;
step C414: and acquiring a cumulative distribution function CDF of the illumination compensated image and the target image, namely accumulating the pixel probability of each gray level in the two images.
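Taken together, steps C41-C43 and C411-C414 amount to grayscale histogram matching of the illumination-compensated image to a target image. A minimal sketch is shown below, using linear interpolation between the two CDFs as suggested above; applying the mapping per color channel instead of on the gray image would be an alternative that the description does not specify.

```python
import cv2
import numpy as np

def cdf_of(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)  # step C412
    prob = hist / hist.sum()                       # normalized histogram (step C413)
    return np.cumsum(prob)                         # cumulative distribution (step C414)

def histogram_match(compensated, target):
    src = cv2.cvtColor(compensated, cv2.COLOR_BGR2GRAY)   # step C411
    ref = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
    src_cdf, ref_cdf = cdf_of(src), cdf_of(ref)
    # Step C42: mapping function from the source CDF to the reference gray levels.
    mapping = np.interp(src_cdf, ref_cdf, np.arange(256))
    # Step C43: apply the mapping to every pixel of the compensated image.
    return mapping[src].astype(np.uint8)
```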
Preferably, step C5 includes:
Step C51: acquiring the value of each pixel point in the enhanced image after step C4 is executed;
Step C52: sorting the pixel values in the neighborhood surrounding a pixel point, and selecting the value in the middle position as the new value of the current pixel point;
step C53: step C52 is sequentially performed for each pixel of the enhanced image after step C4.
Preferably, an ultralow-illumination image video online monitoring system comprises a video monitoring device, a video conversion module and a video output device, wherein the video conversion module applies the above ultralow-illumination image video online monitoring method;
the video monitoring device is used for shooting ultralow-illumination video;
the video conversion module is used for converting the shot ultralow-illuminance video into an enhanced online monitoring video;
the video output device is used for outputting the enhanced online monitoring video.
The technical principle of the present invention is described above in connection with the specific embodiments. The description is made for the purpose of illustrating the general principles of the invention and should not be taken in any way as limiting the scope of the invention. Other embodiments of the invention will be apparent to those skilled in the art from consideration of this specification without undue burden.

Claims (8)

1. An ultralow-illuminance image video on-line monitoring method, characterized by comprising the following steps:
Step A: acquiring an ultralow-illumination video allowing online monitoring within a delay time, extracting a plurality of video frame images from the ultralow-illumination video, and marking the plurality of video frame images according to a frame-by-frame sequence;
Step B: establishing a processing template, dividing the processing template into areas, and attaching a label to each divided area, wherein each label corresponds to one mark;
sequentially placing the plurality of video frame images into the corresponding areas according to the correspondence between the labels and the marks, so as to splice the plurality of video frame images onto the processing template, and taking the spliced processing template as an original image;
Step C: constructing and training an image enhancement model, inputting an original image into the trained image enhancement model, and outputting an enhanced image after the trained image enhancement model performs image enhancement operation on the original image;
Step D: taking the enhanced image as a processing template, dividing corresponding areas according to labels, and cutting each area to obtain a plurality of enhanced video frame images;
step E: and sequentially placing the enhanced video frame images into a time axis according to the marks so as to synthesize the enhanced online monitoring video.
2. The method for on-line monitoring of ultralow-illuminance image video of claim 1, wherein the method comprises the following steps:
between step D and step E, further comprising performing a judging operation and a re-enhancement operation:
the judging operation comprises judging whether the enhanced video frame image is lower than a preset illuminance; if so, executing the re-enhancement operation, otherwise executing step E;
the re-enhancement operation includes:
Step D1: selecting the video frame images lower than the preset illuminance and enlarging them, while retaining the video frame images higher than the preset illuminance;
Step D2: merging the areas into which the processing template was divided in step B, merging the labels corresponding to the merged areas, and re-associating the merged labels with the marks of the selected video frame images lower than the preset illuminance;
Step D3: sequentially placing the enlarged video frame images into the merged areas according to the correspondence between the merged labels and the marks of the selected video frame images lower than the preset illuminance to obtain a newly spliced processing template, inputting the newly spliced processing template into the image enhancement model as a new original image for secondary enhancement, and outputting a secondary-enhanced image;
Step D4: executing the operation of step D on the secondary-enhanced image to obtain secondary-enhanced video frame images, and shrinking the secondary-enhanced video frame images to restore their original size;
Step D5: re-executing step E on the secondary-enhanced video frame images together with the video frame images whose illuminance is higher than the preset illuminance.
3. The method for on-line monitoring of ultralow-illuminance image video of claim 1, wherein the method comprises the following steps:
In said step C, training the image enhancement model comprises:
Step C1: constructing a training sample, wherein the training sample comprises an ultralow-illumination initial image;
step C2: preprocessing a training sample, wherein the preprocessing comprises the steps of denoising and contrast enhancement operation on an ultralow-illumination initial image;
Step C3: according to the illumination condition of the ultra-low illumination environment, performing illumination compensation on the initial image;
Step C4: processing the initial image after illumination compensation by using an enhancement algorithm;
Step C5: performing noise suppression on the enhanced image after the step C4 is executed;
step C6: and optimizing the enhanced image after noise suppression, wherein the optimization comprises color correction and edge enhancement.
4. The method for online monitoring of ultralow-illuminance image video according to claim 3, wherein the method comprises the following steps:
step C3 includes:
Step C31: acquiring ambient illumination information of the ultra-low-illumination environment;
Step C32: extracting the first n highest-brightness pixels in the initial image, discarding outlier values among them, averaging the remaining values to obtain an estimated highest-brightness pixel value, and taking the estimated highest-brightness pixel value as the estimated atmospheric light value;
step C33: acquiring the transmissivity of each pixel in the initial image according to the estimated atmospheric light value;
Step C34: performing illumination compensation on the initial image through formula one;
formula one: I_compensated = (I - A) / max(t, t0) + A;
wherein: I_compensated represents the illumination-compensated image;
I represents the input initial image;
A represents the estimated atmospheric light value;
t represents transmittance;
t0 is 1.
5. The method for online monitoring of ultralow-illuminance image video according to claim 3, wherein the method comprises the following steps:
step C4 includes:
Step C41: calculating the cumulative distribution functions (CDFs) of the illumination-compensated image and the target image;
Step C42: mapping the CDF of the illumination compensated image to the CDF of the target image to obtain a mapping function;
Step C43: each pixel value of the illumination-compensated image is mapped to a pixel value of the target image using a mapping function.
6. The method for on-line monitoring of ultralow-illuminance image video of claim 5, wherein the method comprises the following steps:
step C41 includes:
step C411: converting the illumination compensated image and the target image into a gray scale image;
Step C412: acquiring histograms of the image after illumination compensation and the target image, namely counting the number of pixels of each gray level in the two images;
Step C413: obtaining normalized histograms of the illumination-compensated image and the target image, namely dividing the number of pixels of each gray level in the two images by the total number of pixels to obtain the pixel probability of each gray level in the two images;
step C414: and acquiring a cumulative distribution function CDF of the illumination compensated image and the target image, namely accumulating the pixel probability of each gray level in the two images.
7. The method for online monitoring of ultralow-illuminance image video according to claim 3, wherein the method comprises the following steps:
step C5 includes:
Step C51: acquiring the value of each pixel point in the enhanced image after step C4 is executed;
Step C52: sorting the pixel values in the neighborhood surrounding a pixel point, and selecting the value in the middle position as the new value of the current pixel point;
step C53: step C52 is sequentially performed for each pixel of the enhanced image after step C4.
8. An ultralow-illuminance image video on-line monitoring system is characterized in that:
The system comprises a video monitoring device, a video conversion module and a video output device, wherein the video conversion module applies the ultralow-illumination image video online monitoring method according to any one of claims 1-7;
the video monitoring device is used for shooting ultralow-illumination video;
the video conversion module is used for converting the shot ultralow-illuminance video into an enhanced online monitoring video;
the video output device is used for outputting the enhanced online monitoring video.
CN202410159964.4A 2024-02-05 2024-02-05 Ultralow-illumination image video online monitoring method and system Active CN117714902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410159964.4A CN117714902B (en) 2024-02-05 2024-02-05 Ultralow-illumination image video online monitoring method and system

Publications (2)

Publication Number Publication Date
CN117714902A CN117714902A (en) 2024-03-15
CN117714902B (en) 2024-04-19

Family

ID=90144612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410159964.4A Active CN117714902B (en) 2024-02-05 2024-02-05 Ultralow-illumination image video online monitoring method and system

Country Status (1)

Country Link
CN (1) CN117714902B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011011542A1 (en) * 2009-07-21 2011-01-27 Integrated Device Technology, Inc. A method and system for detection and enhancement of video images
CN110852965A (en) * 2019-10-31 2020-02-28 湖北大学 Video illumination enhancement method and system based on generation countermeasure network
CN219349035U (en) * 2022-12-31 2023-07-14 广东森旭通用设备科技有限公司 Transmission line fault location sensing terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7265693B2 (en) * 2018-12-28 2023-04-27 株式会社デンソーテン Attached matter detection device and attached matter detection method

Also Published As

Publication number Publication date
CN117714902A (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN109767422B (en) Pipeline detection and identification method based on deep learning, storage medium and robot
JP4986250B2 (en) System and method for enhancing digital image processing with noise removal function
JP4307910B2 (en) Moving image clipping device and method, and program
US9332156B2 (en) Glare and shadow mitigation by fusing multiple frames
Park et al. Single image haze removal with WLS-based edge-preserving smoothing filter
CN114331873B (en) Non-uniform illumination color image correction method based on region division
CN110570360A (en) Retinex-based robust and comprehensive low-quality illumination image enhancement method
CN107404647A (en) Camera lens condition detection method and device
CN114821440B (en) Mobile video stream content identification and analysis method based on deep learning
CN116012232A (en) Image processing method and device, storage medium and electronic equipment
CN117152182B (en) Ultralow-illumination network camera image processing method and device and electronic equipment
CN117714902B (en) Ultralow-illumination image video online monitoring method and system
CN108898561B (en) Defogging method, server and system for foggy image containing sky area
CN112288726B (en) Method for detecting foreign matters on belt surface of underground belt conveyor
CN110186929A (en) A kind of real-time product defect localization method
CN110992287B (en) Method for clarifying non-uniform illumination video
Ismail et al. Adapted single scale retinex algorithm for nighttime image enhancement
CN111931754A (en) Method and system for identifying target object in sample and readable storage medium
KR101468433B1 (en) Apparatus and method for extending dynamic range using combined color-channels transmission map
CN115526811A (en) Adaptive vision SLAM method suitable for variable illumination environment
CN110276722B (en) Video image splicing method
Tang et al. Sky-preserved image dehazing and enhancement for outdoor scenes
KR101418521B1 (en) Image enhancement method and device by brightness-contrast improvement
Shin et al. Variational low-light image enhancement based on a haze model
CN112258548B (en) Moving target extraction method based on improved ViBe algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant