CN116402722A - Video image turbulence suppression processing method and device and video processing equipment - Google Patents


Info

Publication number
CN116402722A
CN116402722A (application CN202310609025.0A)
Authority
CN
China
Prior art keywords
image
window
video
turbulence
processing
Prior art date
Legal status
Granted
Application number
CN202310609025.0A
Other languages
Chinese (zh)
Other versions
CN116402722B (en
Inventor
郑小强
张君琦
樊兵
汪俊宏
李承烈
程景
Current Assignee
New Auto Nanjing Video Technology Co ltd
Original Assignee
New Auto Nanjing Video Technology Co ltd
Priority date
Filing date
Publication date
Application filed by New Auto Nanjing Video Technology Co ltd filed Critical New Auto Nanjing Video Technology Co ltd
Priority to CN202310609025.0A priority Critical patent/CN116402722B/en
Publication of CN116402722A publication Critical patent/CN116402722A/en
Application granted granted Critical
Publication of CN116402722B publication Critical patent/CN116402722B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T5/73 Deblurring; Sharpening
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention relates to the technical field of turbulence video image processing, and in particular to a processing method and device for video image turbulence suppression and to video processing equipment. Turbulence video processing is classified into static-mode and dynamic-mode processing according to the observed emphasis and purpose; this approach addresses problems of existing turbulence-processing techniques such as blurred background restoration, difficulty recovering moving targets, insufficient processing capacity, inability to process in real time, complex deployment, and scene failure. The method comprises the following steps: acquiring a turbulence video file or a continuous image sequence; preprocessing; separating the image channels, splitting the multi-channel image into single-channel images for separate processing; when no displacement is observed in the turbulence image and a static object is processed, the image is processed in static mode; when displacement is observed in the turbulence image and a moving object is processed, the image is processed in dynamic mode.

Description

Video image turbulence suppression processing method and device and video processing equipment
Technical Field
The present invention relates to the field of turbulent video image processing technologies, and in particular, to a video image turbulent suppression processing method and apparatus, and a video processing device.
Background
In imaging systems, light emitted from a physical scene propagates through an anisotropic medium. Because the refractive index of the medium is unevenly distributed in three dimensions, the refracted light is scattered unevenly, which in turn distorts the image. Especially in long-distance shooting, where the target object is far from the imaging system, the atmospheric density is easily affected by environmental factors such as temperature, humidity, air pressure, wind speed and particulate matter; the resulting density variations change the refractive index of the propagation medium, so that the light waves from the photographed object scatter unevenly in space and the captured video image is distorted and deformed. In theory, whenever the light propagation medium becomes non-uniform, turbulent image disturbance occurs: in visible light, infrared/thermal imaging, underwater imaging, surface-underwater cross-media imaging, and so on. In particular, when the object is far away or long-focal-length imaging is used, the longer optical path means the acquired video image tends to exhibit more severe blurring, jitter, pixel offset, flare and similar phenomena.
Because the captured turbulence video differs greatly from the real object, geometrical shapes are severely distorted. Using turbulence video directly for target identification, object judgment and detection often causes significant errors, sometimes making identification impossible and defeating the purpose of shooting.
Turbulence is strongly random and unpredictable. Conventional turbulence suppression relies solely on modeling: establishing an optical propagation model between the physical scene and the camera to restore the video image, which is costly and not universally applicable. Restoration by lucky imaging, meanwhile, may fail to find the clearest image, causing restoration to fail; and searching for the clearest frame tends to recover moving objects poorly when facing a running object, or not at all.
Poor restoration quality, difficulty recovering moving objects, complex computation, high computing-power requirements on the platform, complex deployment environments, high deployment cost and the difficulty of real-time processing have together left turbulence video processing with little practical application at present.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a processing method and device for video image turbulence suppression, and video processing equipment, which classify turbulence video processing into static-mode and dynamic-mode processing according to the observed emphasis and purpose. This approach addresses problems of existing turbulence-processing techniques such as blurred background restoration, difficulty restoring moving targets, insufficient processing capacity, inability to process in real time, complex deployment and scene failure. In practical application, facing different scenes, the method can switch between static mode and dynamic mode at any time, and uses different image statistics methods to restore the background objects and the moving targets in the turbulence video to the greatest extent, so as to achieve practical applicability.
The invention discloses a processing method for video image turbulence suppression, which comprises the following steps:
acquiring a turbulence video file or a continuous image sequence: consecutive frames of the same scene captured in a turbulence video are correlated, i.e., for a collected group of turbulence video frames N_1, N_2, ..., N_i, there is a certain correlation between preceding and following frames;
because consecutive images are correlated, the current turbulence frame to be restored is related to the preceding image sequence; the weight ratio of each frame is defined as a_1, a_2, ..., a_i;
preprocessing: the collected images are preprocessed to a certain extent, including image stabilization, image pre-denoising and image sharpening; the preprocessed images have width w and height h, and the preprocessed image sequence is n_1, n_2, ..., n_i;
separating the image channels: the multi-channel image is separated into single-channel images for separate processing; during processing the image can be separated into the three primary color channels R, G, B, or into the Y, U, V (luminance and chrominance) channels;
when no displacement is observed in the turbulence image and an object in a static state is processed, the processing of the image is in a static mode;
when a displacement is observed in the turbulence image and an object in a dynamic state is processed, the processing of the image is in a dynamic mode.
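The channel-separation step above can be sketched in a few lines (a minimal NumPy sketch; the patent does not specify an implementation, and the H x W x 3 array layout with R, G, B channel order is an assumption):

```python
import numpy as np

def split_channels(frame: np.ndarray):
    """Split an H x W x 3 image into three single-channel images.

    The (H, W, 3) layout with channels in R, G, B order is an assumption;
    the same idea applies to a Y, U, V representation.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return r, g, b

def merge_channels(r, g, b):
    """Recombine three single-channel images into one H x W x 3 image."""
    return np.stack([r, g, b], axis=-1)

# Tiny demonstration: splitting and merging round-trips the image.
frame = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)
r, g, b = split_channels(frame)
restored = merge_channels(r, g, b)
```

Each single-channel image is then processed independently in static or dynamic mode and the channels are merged back at the end.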
Further, the static mode includes the steps of: dividing each separated image channel into consistency sub-windows; computing consistency for the images within the sub-windows; introducing the image sequence set weight ratio; calculating the image to be restored using the consistency windows and the weight ratio; completing the calculation for the multi-channel image and synthesizing the multi-channel image; outputting or storing the restored turbulence video.
Further, the dynamic mode includes the steps of: dividing each separated image channel into consistency sub-windows; computing consistency for the images within the sub-windows; introducing difference sub-windows to calculate the difference and computing the number of restoration sequence sets; calculating the image to be restored using the consistency windows, the weight ratio and the number of restoration sequence sets; completing the calculation for the multi-channel image and synthesizing the multi-channel image; outputting or storing the restored turbulence video.
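The "computing consistency for the images within the sub-windows" step shared by both modes can be sketched as a sliding-window statistic (a hypothetical sketch: the mean is one of the statistics the text names, while the dense stride-1 evaluation over valid window positions is an assumption):

```python
import numpy as np

def window_means(channel: np.ndarray, r: int, s: int) -> np.ndarray:
    """Mean of each (2r+1) x (2s+1) consistency sub-window of one channel.

    Returns one mean per valid window position (dense, stride 1); the
    median or another consistency statistic could be substituted.
    """
    h, w = channel.shape
    wh, ww = 2 * r + 1, 2 * s + 1
    out = np.empty((h - wh + 1, w - ww + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = channel[y:y + wh, x:x + ww].mean()
    return out

chan = np.array([[1., 2., 3.],
                 [4., 5., 6.],
                 [7., 8., 9.]])
E = window_means(chan, 1, 1)  # one 3x3 window over a 3x3 channel
```

Applying this per frame and per channel yields the statistics E_iRh, E_iGh, E_iBh used later as the consistency sub-window image set.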
Further, in the static mode, the main purpose of the process is to address disturbances caused by turbulence to objects that are not moving, such as blurring or jitter. At this time, the background image is restored in consideration of the maximum correlation between images.
The static mode more specifically includes the following steps:
dividing the preprocessed image into sub-window images with radii r and s, the window size being (2r+1) × (2s+1), where 0 ≤ r ≤ (w-1)/2, 0 ≤ s ≤ (h-1)/2, and w, h are the width and height of the preprocessed image;
these sub-window images are called consistency window images, and their collection forms the preprocessed image; the window degenerates into a single pixel when r = 0, s = 0, and into the whole image when r = (w-1)/2, s = (h-1)/2;
dividing each image of the preprocessed sequence n_1, n_2, ..., n_i into windows of size (2r+1) × (2s+1) yields a sequence of h consistency sub-window image sets; for each consistency sub-window image, its consistency and correlation characteristic E is computed, e.g., the mean or median E_iRh of the R pixels, E_iGh of the G pixels, and E_iBh of the B pixels within each sub-window of each frame, where the subscripts R, G, B denote the image channel, i indexes the frame of the image sequence, and h indexes the sub-window within each frame; the statistics E_iRh, E_iGh, E_iBh serve as the data of the consistency sub-window image set;
considering that, in the correlation of consecutive video images, the video to be restored is affected differently by the preceding consecutive frames: in the preprocessed sequence n_1, n_2, ..., n_i with n_i the turbulence frame to be restored, the closer an image is to n_i, the larger its influence on the composition of n_i; weights a_i ≥ a_(i-1) are therefore introduced as the influence factors of the preceding images on n_i, where the influence contribution of a single image n_j with weight a_j on n_i is
influence(n_j → n_i) = a_j / (a_1 + a_2 + ... + a_i);
thus, because the weights increase along the image sequence, frames adjacent to the current frame carry larger weight and frames farther from it carry smaller weight;
with the weight ratio introduced, the weighted consistency correlation, e.g., the weighted mean, is calculated for each frame, each consistency window image and each separated channel; the sub-window of the turbulence image to be restored is
E_c = (a_1·E_1c + a_2·E_2c + ... + a_i·E_ic) / (a_1 + a_2 + ... + a_i), c ∈ {R, G, B};
the R, G, B channel values of each sub-window to be restored are then calculated, the sub-window channel data are merged, and the R, G, B channel data are arranged according to the image data format to obtain the restored turbulence image.
With the above method, in static-scene application, following the principle of maximum similarity of consecutive scene images, the sub-window means of the image sequence are calculated using the correlation of the image channels of the consistency window images obtained from consecutive frames, and the sub-windows of each frame of the image sequence are summed by their weight values using the weight ratio, finally forming the static-scene turbulence image.
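Per channel and per pixel (i.e., with a consistency window of size 1), the static-mode restoration just summarized reduces to a weighted temporal mean over the frame sequence; a minimal sketch, assuming the monotone weights a_i = i (one choice satisfying a_i ≥ a_(i-1)):

```python
import numpy as np

def restore_static(frames: np.ndarray, weights=None) -> np.ndarray:
    """Weighted temporal mean of a stack of frames (N, H, W).

    Later frames get larger weights, so frames near the one being
    restored contribute more, as the weighting scheme requires.
    """
    n = frames.shape[0]
    if weights is None:
        weights = np.arange(1, n + 1, dtype=float)  # a_i = i (assumption)
    w = np.asarray(weights, dtype=float)
    # Contract the frame axis: sum_i a_i * frame_i, then normalise.
    return np.tensordot(w, frames.astype(float), axes=1) / w.sum()

# Two 1x1 'frames' with values 0 and 3, weights 1 and 2: (0*1 + 3*2)/3 = 2
frames = np.array([[[0.]], [[3.]]])
out = restore_static(frames)
```

Running this once per separated channel and merging the channels back gives the restored static-scene image.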
Further, the dynamic mode more specifically includes the steps of:
in dynamic-mode processing, the main purpose is to restore the moving object: addressing the motion distortion and disturbance that turbulence causes to moving objects by considering the difference between images on top of their correlation;
if the correlation between images is weighted too heavily, restoration of the moving object is suppressed and the object may be lost; if it is weighted too lightly, turbulence is not noticeably suppressed. The internal difference of the image to be restored therefore needs to be measured, and different correlation characteristics are adopted in sub-window regions of different channels with different degrees of difference.
Defining a sub-window image on the preprocessed image, wherein the radius is R, S, and the window size is as follows:
(2 R+1) × (2 S+1), wherein r.ltoreq.R.ltoreq.w-1/2, s.ltoreq.S.ltoreq.h-1/2;
when the consistency sub-window and the difference sub-window share the same center, the difference statistic of the difference window of each preprocessed frame, e.g., the variance or standard deviation, or the image mean after high-pass filtering, is calculated in turn, and the maximum δ_max and minimum δ_min of the difference sub-windows within each frame are found;
the difference is normalized using δ_max and δ_min, i.e., δ_pN = (δ_p - δ_min) / (δ_max - δ_min), where δ_p is the difference value of the difference sub-window and δ_pN is its normalized value;
using the normalized difference, each channel of each image uses different numbers of consecutive frames and different weight values to restore the consistency windows in the different consistency sub-windows. The strategy is: if the difference is large, more images preceding the image to be restored are selected for restoration; otherwise the set weights a_i ≥ a_(i-1), with i ≥ 2, are kept;
the weighted consistency correlation is then calculated for each separated channel of each consistency window image of each frame; the image set formed by these sub-windows is the turbulence image to be restored, giving the restored turbulence image.
Under the difference sub-window, the difference of each separated channel of each consistency window image of the image to be restored is calculated within the difference window. Using this difference, different strategies can be adopted to restore the consistency window images according to the characteristics of the image to be restored, finally restoring the image containing the moving object.
With this method, in dynamic-scene application, following the principles of consecutive-scene image similarity and intra-image difference, a strategy of selecting different restoration consistency sub-windows according to the difference windows is used, finally forming the dynamic-scene turbulence image.
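The min-max normalization at the heart of the dynamic-mode strategy can be sketched as follows (standard deviation as the difference measure, per the text's example; the guard for a perfectly flat difference map is an added safeguard):

```python
import numpy as np

def normalized_difference(diff_map: np.ndarray) -> np.ndarray:
    """Min-max normalise a per-window difference map delta_p to [0, 1].

    Implements delta_pN = (delta_p - delta_min) / (delta_max - delta_min).
    """
    d_min, d_max = diff_map.min(), diff_map.max()
    if d_max == d_min:                  # flat map: no difference anywhere
        return np.zeros_like(diff_map, dtype=float)
    return (diff_map - d_min) / (d_max - d_min)

# Example per-window standard deviations.
diff = np.array([[2.0, 4.0],
                 [6.0, 10.0]])
norm = normalized_difference(diff)
```

Regions with normalized difference near 1 (strong motion) are then restored from more preceding frames, and near-zero regions from fewer, as the strategy above describes.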
A processing device for video image turbulence suppression comprises a file and video stream input module, a turbulence processing server and a file and video stream output module;
the file and video stream input module is used for inputting videos to the turbulence processing server, the turbulence processing server processes the videos according to the video image turbulence suppression processing method, and the video stream output module is used for receiving the videos processed by the turbulence processing server.
A video processing device comprises a bus, a transceiver, a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the transceiver, the memory and the processor are connected through the bus, and the computer program realizes steps in a processing method for inhibiting video image turbulence when being executed by the processor.
Compared with the prior art, the invention has the following beneficial effects: the processing is fast, the computing-power requirement on the platform is low, the deployment environment is simple, the deployment cost is low, and the real-time processing requirement of multiple video channels can be met, greatly widening the application range; in essence, the method computes statistics of the consistency and the difference of multiple video frames, and is strong in both turbulence suppression and timely processing;
on this basis, a complete processing chain is constructed for turbulence video image suppression, realizing reliable application and analysis of turbulence video image suppression, effectively improving the efficiency of enterprises in the video image application field while reducing research and development cost;
two processing scenarios, a static mode and a dynamic mode of the turbulence image, are constructed; they can be applied effectively to different observation occasions and can be switched at will. Based on the consistency characteristics and difference characteristics of consecutive images, the expected restored video is obtained in both static-scene and dynamic-scene turbulence processing;
the weight ratios of the various images can be designed according to actual needs to dynamically adjust the weights of consecutive turbulence frames, which is especially suitable for image restoration of real-time turbulence video.
Drawings
FIG. 1 is a flow chart of a static mode;
FIG. 2 is a flow chart of a dynamic mode;
FIG. 3 is a schematic diagram of a processing apparatus for video image turbulence suppression;
the left side of fig. 4 is the original turbulence video, and the right side is the turbulence video after static mode processing;
FIG. 5 is an original turbulence video on the upper side, and a turbulence video after static mode processing on the lower side;
the left side of fig. 6 is the original turbulence video, and the right side is the turbulence video after dynamic mode processing.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
Embodiments of the present invention will now be described with reference to the turbulent video of FIGS. 4-6;
1. preprocessing the acquired turbulence video image, including basic image stabilization, image noise removal and other operations, so as to remove the interference of noise, jitter and the like on an image picture in the shooting process;
2. based on the different application purposes of the turbulence video image: when observing static objects in the turbulence image, such as buildings and roads, the turbulence video is processed in static mode; when observing moving objects, such as driving cars, walking pedestrians and other objects with displacement change, the turbulence video is processed in dynamic mode;
3. in static mode, 100 frames of turbulence video images N_i, i ∈ [1, 100], are acquired; each channel has a pixel size of 1920 × 1080, i.e., w = 1920, h = 1080;
4. channel separation is then performed on N_i, e.g., into R, G, B channels, denoted N_iR, N_iG, N_iB; each channel is a single-channel matrix image with 1920 columns and 1080 rows;
5. a consistency sub-window image with radii r and s is defined, satisfying 0 ≤ r ≤ (w-1)/2 and 0 ≤ s ≤ (h-1)/2, i.e., 0 ≤ r ≤ 959 and 0 ≤ s ≤ 539;
6. within the above ranges of r and s, the consistency of the consistency sub-window images is computed, e.g., the mean or median; this embodiment takes the mean as an example. For each turbulence frame N_i, i ∈ [1, 100], the pixel mean within the consistency window is computed for the three channels N_iR, N_iG, N_iB until the whole image sequence is processed; the result for each frame is a matrix;
7. if r = s = 0, the consistency window size is 1, and the computed mean of each pixel of each turbulence frame is the pixel value itself;
8. a weight ratio between consecutive frames is introduced: a_i = a_(i-1) + 1 for 2 ≤ i ≤ 100, with a_1 = 1; the pixel means E_R, E_G, E_B of the R, G, B channels with the weights a_i introduced are then
E_c(x, y) = (a_1·p_1c(x, y) + a_2·p_2c(x, y) + ... + a_100·p_100c(x, y)) / (a_1 + a_2 + ... + a_100),
where c ∈ {R, G, B} indicates that the pixel means of the three channels are computed in turn, p denotes the pixel value at a given point, and w and h denote the columns and rows of the image matrix; the weighted mean is computed for each pixel in turn;
9. for the above steps, the computed weighted mean of each point is stored in a corresponding matrix, denoted Y_c, where c ∈ {R, G, B};
10. because the pixel values of the R, G, B channels of the 100 frames of image data have been averaged, Y_c is still an image: the restored turbulence data; storing the Y_c three-channel data in an image display format yields the restored turbulence video image;
11. when an observer needs to observe a moving object in the turbulence video, the dynamic mode is used; taking fig. 4 as an example, the range inside the middle circle is the moving object; the image comprises three channels R, G, B, each channel with a pixel size of 1920 × 1080;
12. a difference sub-window image is defined whose radii R, S are large enough to enclose the consistency sub-window; in this embodiment the consistency sub-window size is 1, so R and S need only be greater than 1; the radii are taken as R = S = 7;
13. with the consistency window of step 6, the difference value of the image to be restored is computed over the difference sub-window radius; taking the standard deviation as an example, it is denoted δ_p;
14. since the consistency sub-window degenerates to pixel size, 1920 × 1080 difference sub-windows are obtained in turn, each centered on a consistency sub-window and holding the standard deviation of the pixel values within radius 7; the maximum δ_max and minimum δ_min of the standard deviation are found;
15. the standard deviation is normalized: δ_pN = (δ_p - δ_min) / (δ_max - δ_min), where δ_p is the standard deviation at the pixel and δ_pN is the normalized standard deviation at that location;
16. according to the normalized standard deviation, the number of consecutive images in the consistency-related sequence set is computed for each location: the larger the normalized standard deviation, the larger the number of images in the sequence set, namely C = 1 + δ_pN · (255.0 / μ), where μ is a settable external parameter that can be used to manually adjust the sequence set size;
17. each consistency window thus has its own corresponding sequence set of C frames;
18. to reflect the difference of the moving targets, the weight ratio between frames is set to a_i = (i+1)², which satisfies the weight-ratio condition a_i ≥ a_(i-1);
19. following the weighted mean formula of step 8,
E_c(x, y) = (Σ_j a_j·p_jc(x, y)) / (Σ_j a_j), summed over the last C frames,
where C is the sequence set count obtained from the normalized standard deviation of step 15, the mean of the pixel values within the consistency sub-window of the current channel is computed;
20. repeating the above process for the three channels R, G and B of the image respectively to obtain a restored image of the turbulent video.
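Steps 11-20 above can be condensed into a per-pixel sketch for one channel (consistency window of size 1, difference radius 7 scaled down here, and a_i = (i+1)² follow the embodiment; the function names and the clipping of C to the available frame count are illustrative assumptions):

```python
import numpy as np

def local_std(channel: np.ndarray, radius: int) -> np.ndarray:
    """Standard deviation of pixel values within each difference window."""
    h, w = channel.shape
    out = np.empty_like(channel, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = channel[y0:y1, x0:x1].std()
    return out

def restore_dynamic(frames: np.ndarray, radius: int = 7, mu: float = 16.0):
    """Per-pixel weighted mean over an adaptive number of trailing frames.

    frames: (N, H, W) single-channel sequence; the last frame is restored.
    mu is the settable external parameter of embodiment step 16.
    """
    n, h, w = frames.shape
    std_map = local_std(frames[-1].astype(float), radius)
    d_min, d_max = std_map.min(), std_map.max()
    norm = np.zeros_like(std_map) if d_max == d_min else (std_map - d_min) / (d_max - d_min)
    weights = np.array([(i + 1) ** 2 for i in range(n)], dtype=float)  # a_i = (i+1)^2
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            # C = 1 + delta_pN * (255.0 / mu), clipped to available frames.
            c = max(1, min(n, int(round(1.0 + norm[y, x] * (255.0 / mu)))))
            wgt = weights[n - c:]
            out[y, x] = (wgt * frames[n - c:, y, x]).sum() / wgt.sum()
    return out

# Constant sequence: zero difference everywhere, so C = 1 per pixel
# and every pixel restores to its own value.
frames = np.full((5, 4, 4), 9.0)
restored = restore_dynamic(frames, radius=1)
```

Repeating this for the R, G, B channels and merging them, as in step 20, yields the restored dynamic-mode image.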
According to the content of the turbulence image and the observed targets in practical use, the turbulence image is processed in a static mode or a dynamic mode.
For static-mode processing, a consistency sub-window concept is introduced to divide the image to be restored, and a mean weight is introduced to perform weighted-mean correction over several consecutive video frames, stabilizing the video background content; the mean weight maintains the correlation between video sequences.
For dynamic image processing, a difference sub-window concept is introduced: the difference magnitude of the consistency sub-window within each difference sub-window of the frame to be restored yields the number of consistency-related image sequence sets used for restoration, and under different weight ratios the purpose and requirement of restoring the dynamic image are achieved.
By adopting different statistical methods in the static mode and the dynamic mode, clear images of turbulence video can be stably restored under different conditions; the method is easy to implement and deploy, and can be practically applied to scenes requiring turbulence processing.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that it will be apparent to those skilled in the art that modifications and variations can be made without departing from the technical principles of the present invention, and these modifications and variations should also be regarded as the scope of the invention.

Claims (7)

1. A method of processing video image turbulence suppression, comprising the steps of:
acquiring a turbulence video file or a continuous image sequence;
preprocessing;
separating the image channels, and separating the multi-channel image into single-channel images for respective processing;
when no displacement is observed in the turbulence image and an object in a static state is processed, the processing of the image is in a static mode;
when a displacement is observed in the turbulence image and an object in a dynamic state is processed, the processing of the image is in a dynamic mode.
2. A method of processing video image turbulence suppression as recited in claim 1, wherein the static mode comprises the steps of: dividing each separated image channel into consistency sub-windows; computing consistency for the images within the sub-windows; introducing the image sequence set weight ratio; calculating the image to be restored using the consistency windows and the weight ratio; completing the calculation for the multi-channel image and synthesizing the multi-channel image; outputting or storing the restored turbulence video.
3. A method of processing video image turbulence suppression as recited in claim 2, wherein the dynamic mode comprises the steps of: dividing each separated image channel into consistency sub-windows; computing consistency for the images within the sub-windows; introducing difference sub-windows to calculate the difference and computing the number of restoration sequence sets; calculating the image to be restored using the consistency windows, the weight ratio and the number of restoration sequence sets; completing the calculation for the multi-channel image and synthesizing the multi-channel image; outputting or storing the restored turbulence video.
4. A method of processing video image turbulence suppression according to claim 3, characterized in that the static mode more specifically comprises the steps of:
dividing the preprocessed image into sub-window images with radii r and s, the window size being (2r+1) × (2s+1), where 0 ≤ r ≤ (w-1)/2, 0 ≤ s ≤ (h-1)/2, and w, h are the width and height of the preprocessed image;
these sub-window images are called consistency window images, and their collection forms the preprocessed image; the window degenerates into a single pixel when r = 0, s = 0, and into the whole image when r = (w-1)/2, s = (h-1)/2;
dividing each image of the preprocessed image sequence n_1, n_2, …, n_i into windows of size (2r+1) × (2s+1), obtaining a sequence of consistency sub-window image sets, and counting the consistency and correlation characteristic E of each consistency sub-window image;
considering that, in the correlation of consecutive video images, the video to be restored is affected differently by the preceding consecutive frames: in the consecutive preprocessed image sequence n_1, n_2, …, n_i, where n_i is the turbulence video to be restored, the closer an image is to n_i, the greater its influence on the composition of n_i; weights satisfying a_i ≥ a_{i-1} are therefore introduced as the influence factors of the images preceding n_i on n_i, wherein the influence of one image with weight a_j on n_i is:
(formula presented as image FDA0004252031100000021 in the original; not recoverable from the text)
it can be seen that, because the weights increase along the image sequence, images adjacent to the current image carry larger weights while images farther from the current image carry smaller weights;
with the weight ratios introduced, calculating the weighted consistency correlation for each separated channel of each consistency window image of each frame; the turbulence image to be restored is:
(formula presented as image FDA0004252031100000022 in the original; not recoverable from the text)
calculating the R, G, and B channel values of the sub-windows to be restored, merging the sub-window channel data, and arranging the R, G, and B channel data according to the image data format to obtain the restored turbulence image.
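The static-mode steps of claims 2 and 4 can be sketched as follows. This is a minimal illustration, assuming monotone weights a_j proportional to j (normalized to sum to 1) and a per-channel weighted temporal average; the actual consistency/correlation statistic E and the exact weight formula appear only as images in the source, so those choices, and all names here, are assumptions.

```python
import numpy as np

def subwindows(channel, r, s):
    """Tile a (h, w) channel into consistency sub-windows of size
    (2r+1) x (2s+1); edge windows are truncated at the border.
    r = 0, s = 0 degenerates to single pixels, per claim 4."""
    h, w = channel.shape
    wh, ww = 2 * s + 1, 2 * r + 1
    return [channel[y:y + wh, x:x + ww]
            for y in range(0, h, wh) for x in range(0, w, ww)]

def sequence_weights(n):
    """Monotone weights a_1 <= ... <= a_n normalized to sum to 1, so
    frames nearer the frame to be restored weigh more (a_j proportional
    to j is an illustrative choice, not taken from the claims)."""
    total = n * (n + 1) // 2
    return [j / total for j in range(1, n + 1)]

def restore_static(frames_rgb):
    """Weighted temporal average over the preprocessed sequence,
    computed per colour channel, then re-merged into one RGB image."""
    stack = np.stack(frames_rgb).astype(np.float64)   # (n, h, w, 3)
    w = np.asarray(sequence_weights(len(frames_rgb)))
    return np.tensordot(w, stack, axes=1)             # weighted sum over n

frames = [np.full((4, 4, 3), float(v)) for v in (10, 20, 30)]
out = restore_static(frames)
print(out[0, 0, 0])                              # (10*1+20*2+30*3)/6 = 23.33...
print(len(subwindows(out[..., 0], r=1, s=1)))    # 4 truncated 3x3 windows
```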
5. A method of processing video image turbulence suppression according to claim 4, characterized in that the dynamic mode more specifically comprises the steps of:
defining a sub-window image on the preprocessed image with radii R and S, the window size being (2R+1) × (2S+1), wherein r ≤ R ≤ (w−1)/2 and s ≤ S ≤ (h−1)/2;
with the consistency sub-window and the difference sub-window sharing the same center, calculating in sequence the difference between the consistency sub-window and the difference window of each frame of the preprocessed image, and finding the maximum value δ_max and the minimum value δ_min of each difference sub-window in each frame;
normalizing the standard deviation using the maximum value δ_max and the minimum value δ_min, i.e. δ_pN = (δ_p − δ_min)/(δ_max − δ_min), wherein δ_p is the difference of the difference sub-window and δ_pN is the normalized difference of the difference sub-window;
using the normalized difference, each channel of each image adopts, in different consistency sub-windows, different numbers of consecutive frames and different weight values to restore the consistency window; the strategy is as follows: if the difference is too large, more images preceding the image to be restored are selected for the restoration; otherwise the weights are kept as a_i ≥ a_{i-1}, wherein i ≥ 2;
calculating the weighted consistency correlation for each separated channel of each consistency window image of each frame, the image set formed by the sub-windows being the turbulence image to be restored, thereby obtaining the restored turbulence image.
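The dynamic-mode steps of claims 3 and 5 can be sketched as follows: the min-max normalization δ_pN = (δ_p − δ_min)/(δ_max − δ_min) is taken directly from claim 5, while the mapping from normalized difference to restoration-sequence length (linear, with bounds n_min/n_max) is an illustrative assumption — the claims only state that a larger difference selects more preceding images.

```python
def normalize_differences(deltas):
    """Min-max normalization of per-window differences, as in claim 5:
    delta_pN = (delta_p - delta_min) / (delta_max - delta_min)."""
    d_min, d_max = min(deltas), max(deltas)
    if d_max == d_min:            # no contrast between windows
        return [0.0] * len(deltas)
    return [(d - d_min) / (d_max - d_min) for d in deltas]

def frames_to_use(delta_pN, n_min=3, n_max=15):
    """Map a normalized difference in [0, 1] to a restoration-sequence
    length: larger difference -> more preceding frames. The linear
    mapping and the bounds n_min/n_max are illustrative choices."""
    return n_min + round(delta_pN * (n_max - n_min))

d = normalize_differences([2.0, 4.0, 6.0])
print(d)                               # [0.0, 0.5, 1.0]
print([frames_to_use(x) for x in d])   # [3, 9, 15]
```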
6. A processing device for video image turbulence suppression, characterized by comprising a file and video stream input module, a turbulence processing server, and a file and video stream output module;
the file and video stream input module is used for inputting video to the turbulence processing server; the turbulence processing server processes the video according to the method of any one of claims 1-5; and the file and video stream output module is used for receiving the video processed by the turbulence processing server.
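The module layout of claim 6 can be sketched structurally as follows; all class and method names are illustrative, not from the patent, and the server's `process` is a placeholder where the restoration of claims 1-5 would run.

```python
class TurbulenceProcessingServer:
    """Stand-in for the turbulence processing server of claim 6."""
    def process(self, frames):
        # Placeholder: the static/dynamic restoration of claims 1-5
        # would be applied here; this sketch passes frames through.
        return list(frames)

class VideoTurbulenceDevice:
    """File/video-stream input -> turbulence server -> output,
    mirroring the three-module layout of claim 6."""
    def __init__(self, server):
        self.server = server
        self.output = []              # file and video stream output module
    def feed(self, frames):           # file and video stream input module
        self.output.extend(self.server.process(frames))

dev = VideoTurbulenceDevice(TurbulenceProcessingServer())
dev.feed(["frame1", "frame2"])
print(len(dev.output))  # 2
```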
7. A video processing device comprising a bus, a transceiver, a memory, a processor and a computer program stored on the memory and executable on the processor, the transceiver, the memory and the processor being connected by the bus, characterized in that the computer program when executed by the processor implements the steps of the method according to any of claims 1-5.
CN202310609025.0A 2023-05-29 2023-05-29 Video image turbulence suppression processing method and device and video processing equipment Active CN116402722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310609025.0A CN116402722B (en) 2023-05-29 2023-05-29 Video image turbulence suppression processing method and device and video processing equipment

Publications (2)

Publication Number Publication Date
CN116402722A true CN116402722A (en) 2023-07-07
CN116402722B CN116402722B (en) 2023-08-22

Family

ID=87014428


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202693258U (en) * 2012-07-19 2013-01-23 华中科技大学 Imaging system for non-contact measurement of oceanic turbulence parameters
CN103281525A (en) * 2011-12-15 2013-09-04 弗莱克斯电子有限责任公司 Networked image/video processing system for enhancing photos and videos
CN105739091A (en) * 2016-03-16 2016-07-06 中国人民解放军国防科学技术大学 Imaging method capable of weakening atmospheric turbulence effect and device thereof
CN115358953A (en) * 2022-10-21 2022-11-18 长沙超创电子科技有限公司 Turbulence removing method based on image registration and dynamic target fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant