CN114841899B - Method for removing infrared image horizontal stripes by space-time-frequency combined compact coding and infrared equipment - Google Patents

Method for removing infrared image horizontal stripes by space-time-frequency combined compact coding and infrared equipment

Info

Publication number
CN114841899B
Authority
CN
China
Prior art keywords
coding
sequence
space
information
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210763142.8A
Other languages
Chinese (zh)
Other versions
CN114841899A (en)
Inventor
蔡李靖
陈林森
字崇德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Zhipu Technology Co ltd
Original Assignee
Nanjing Zhipu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Zhipu Technology Co ltd
Priority to CN202210763142.8A
Publication of CN114841899A
Application granted
Publication of CN114841899B
Legal status: Active

Classifications

    • G06T5/70
    • G06T5/94
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image

Abstract

The application provides a method for removing horizontal stripes from infrared images by space-time-frequency joint compact coding, and an infrared device. The method comprises the following steps: determining line mean information for the acquired infrared image; performing time-series coding to determine a time-series-coded primary coding information sequence; performing spatial-domain coding to determine a spatial-domain-coded primary coding information sequence; performing space-time-frequency joint compact coding to determine a space-time-frequency jointly compactly coded secondary coding information sequence; and performing coding fusion on the acquired infrared image to determine a coded and fused infrared image, the coded and fused infrared image being an infrared image from which the horizontal stripes have been removed. The method takes into account that the target viewing angle of a mobile infrared imaging device moves over time, makes full use of temporal, spatial-domain and frequency-domain information, and removes the flickering stripe noise in the infrared image through the space-time-frequency joint compact coding.

Description

Method for removing infrared image horizontal stripes by space-time-frequency combined compact coding and infrared equipment
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for removing horizontal stripes from infrared images by space-time-frequency joint compact coding and to an infrared device.
Background
In recent years, leaks of process media have occurred frequently in the field of chemical production, causing explosions, fires and other consequences that seriously threaten people's lives and property. The need to popularize and deploy industrial safety monitoring technology in chemical production is therefore increasingly urgent.
Monitoring medium-gas leaks with infrared imaging devices is one of the most advanced methods. Compared with the ambient air, most medium gases have significant characteristics in the infrared band. When an infrared imaging device with a wide spectral range is used to monitor a production pipeline, a leaking medium gas cloud appears in the processed infrared image as a black gas cloud gushing out of the pipeline.
With a portable or mobile wide-spectral-range infrared imaging device, pipelines at different positions of a production site can be checked for medium gas leaks in a mobile inspection mode.
Infrared imaging devices mostly use uncooled infrared detectors. At present, most uncooled infrared detectors collect the infrared light signal with an integrating capacitor, and because of ambient temperature changes or device factors the resulting infrared images exhibit whole-row or whole-column jitter, such as flickering horizontal stripes.
At present, stripe noise is usually removed by temporal, spatial-domain, frequency-domain, deep-learning, black-box and other approaches. However, temporal denoising may introduce information that does not belong to the current frame image, producing ghosting. Spatial-domain and frequency-domain denoising may distort some image details. Deep-learning denoising is difficult to control because its working principle is hard to grasp and its robustness is low, so it struggles with infrared images acquired in different scenes.
Disclosure of Invention
To solve these problems, the present application provides a method, an apparatus and an infrared device for removing horizontal stripes from infrared images by space-time-frequency joint compact coding, so as to remove stripe noise from infrared images.
In a first aspect, the present application provides a method for removing infrared image horizontal stripes by space-time-frequency joint compact coding, including:
determining line mean information for the acquired infrared image;
performing time-series coding according to the determined line mean information and a predetermined time-series coding sequence to determine a time-series-coded primary coding information sequence;
performing spatial-domain coding according to the determined line mean information and a predetermined spatial-domain coding sequence to determine a spatial-domain-coded primary coding information sequence;
performing space-time-frequency joint compact coding according to the determined spatial-domain-coded primary coding information sequence, the determined time-series-coded primary coding information sequence and a predetermined set of frequency-domain coding coefficients to determine a space-time-frequency jointly compactly coded secondary coding information sequence;
and performing coding fusion on the acquired infrared image according to the space-time-frequency jointly compactly coded secondary coding information sequence and the determined line mean information to determine a coded and fused infrared image, wherein the coded and fused infrared image is an infrared image from which the horizontal stripes have been removed.
Further, the performing time-series coding according to the determined line mean information and a predetermined time-series coding sequence to determine a time-series coded primary coding information sequence includes:
weighting the line mean of the corresponding line of each reference frame image recorded in a line mean information reference queue with the time-series coding information corresponding to that reference frame image, to obtain the time-series-coded primary coding information sequence for the acquired infrared image;
wherein the number of line mean information sequences recorded in the line mean information reference queue is a predetermined timing reference frame number, and the number of pieces of time-series coding information recorded in the time-series coding sequence is the predetermined timing reference frame number;
the line mean information sequences recorded in the line mean information reference queue comprise a line mean sequence formed by the line means of the respective lines determined from the acquired infrared image and line mean sequences formed by the line means of the respective lines determined from other timing reference frame images, the other timing reference frame images being acquired before the acquired infrared image in time sequence.
Further, performing spatial-domain coding according to the determined line mean information and a predetermined spatial-domain coding sequence to determine a spatial-domain-coded primary coding information sequence includes:
dividing the acquired infrared image, in line order, into three mutually non-overlapping parts: a first part, a second part and a third part;
performing spatial-domain coding on each line in the first part in a first spatial-domain coding mode;
performing spatial-domain coding on each line in the third part in a third spatial-domain coding mode;
performing spatial-domain coding on each line in the second part in both the first spatial-domain coding mode and the third spatial-domain coding mode;
wherein performing spatial-domain coding on each line in the first part according to the determined line mean information and the predetermined spatial-domain coding sequence comprises:
weighting the line mean information of the lineparam/2 lines immediately following the corresponding line in the acquired infrared image with the respective pieces of spatial-domain coding information in the spatial-domain coding sequence, to obtain the spatial-domain-coded primary coding information corresponding to the corresponding line of the acquired infrared image;
and performing spatial-domain coding on each line in the third part according to the determined line mean information and the predetermined spatial-domain coding sequence comprises:
weighting the line mean information of the lineparam/2 lines immediately preceding the corresponding line in the acquired infrared image with the respective pieces of spatial-domain coding information in the spatial-domain coding sequence, to obtain the spatial-domain-coded primary coding information corresponding to the corresponding line of the acquired infrared image, wherein
lineparam is a spatial coding parameter whose value is not larger than the total number of lines of the acquired infrared image, and the spatial-domain coding sequence comprises lineparam/2 pieces of spatial-domain coding information.
Further, the performing space-time-frequency joint compact coding according to the determined primary coding information sequence after the space-domain coding, the determined primary coding information sequence after the time-sequence coding, and a predetermined frequency-domain coding coefficient set to determine a secondary coding information sequence after the space-time-frequency joint compact coding includes:
determining a primary coding information sequence after space-time coding according to the determined primary coding information sequence after space-domain coding and the determined primary coding information sequence after time-sequence coding;
and performing space-time-frequency joint compact coding according to the primary coding information sequence subjected to space-time coding and a predetermined frequency domain coding coefficient group to determine a secondary coding information sequence subjected to space-time-frequency joint compact coding.
Further, according to the secondary coding information after the space-time-frequency joint compact coding and the determined line mean value information, the obtained infrared image is coded and fused to determine a coded and fused infrared image, which includes:
determining a compensation value of each corresponding line according to the secondary coding information after the space-time-frequency joint compact coding and the determined line mean value information;
and compensating the characteristic values of the pixel points of the columns in the corresponding rows by using the compensation values of the corresponding rows to determine the infrared image after the coding fusion.
Further, the method further comprises:
acquiring a predetermined time sequence weight coefficient and a predetermined time sequence reference frame number;
and generating time-series weights respectively corresponding to the timing reference frames according to the timing weight coefficient and the timing reference frame number, wherein each timing reference frame is equal to or earlier than the acquired infrared image in time sequence, and the time-series weight corresponding to each timing reference frame decreases as the temporal distance between that reference frame and the acquired infrared image increases.
Further, the method further comprises:
and generating the spatial-domain coding sequence comprising lineparam/2 pieces of spatial-domain coding information according to the spatial coding weight coefficient and the spatial coding parameter, wherein each piece of spatial-domain coding information corresponds to the spatial weight of a reference line of the current line of the acquired infrared image in the line order, and the spatial weight corresponding to each reference line decreases as the interval between that reference line and the current line in the line order increases.
Further, the predetermined set of frequency domain coding coefficients comprises a number of coding coefficients that is 2 times the coding width;
after the obtained infrared image is coded and fused, the method further comprises the following steps:
performing sliding convolution between a multi-grid edge extraction template and the determined coded and fused infrared image to obtain an edge-enhanced infrared image;
carrying out contrast correction processing on the determined infrared image subjected to coding fusion to obtain an infrared image subjected to contrast correction processing;
and weighting the infrared image after the edge enhancement processing and the infrared image after the contrast correction processing to obtain the infrared image with the horizontal stripes removed.
In a second aspect, the present application provides an infrared image striation removal device, comprising:
the space-time-frequency joint compact coding unit is used for determining line mean value information aiming at the acquired infrared image;
performing time sequence coding according to the determined line mean value information and a predetermined time sequence coding sequence to determine a primary coding information sequence after the time sequence coding;
performing space-domain coding according to the determined line mean information and a predetermined space-domain coding sequence to determine a primary coding information sequence after the space-domain coding;
performing space-time-frequency joint compact coding according to the determined primary coding information sequence after the space-domain coding, the determined primary coding information sequence after the time-sequence coding and a predetermined frequency domain coding coefficient set to determine a secondary coding information sequence after the space-time-frequency joint compact coding;
and the coding fusion unit is used for performing coding fusion on the acquired infrared image according to the space-time-frequency jointly compactly coded secondary coding information sequence and the determined line mean information to determine a coded and fused infrared image, wherein the coded and fused infrared image is an infrared image with the horizontal stripes removed.
In a third aspect, the present application provides a mobile infrared device comprising:
the infrared detector is used for movably acquiring an infrared image;
the infrared image striation removing device according to the second aspect, which is used for removing striations of the infrared image acquired by the infrared detector.
In summary, the method, the apparatus and the infrared device for removing infrared image horizontal stripes by space-time-frequency joint compact coding provided by the invention take into account that the target viewing angle of a mobile infrared imaging device moves over time, make full use of temporal, spatial-domain and frequency-domain information, and remove the flickering stripe noise in the infrared image through space-time-frequency joint compact coding, thereby avoiding both ghosting and detail distortion; the details of the image are further enhanced by image operations.
Drawings
FIG. 1 is a schematic flow chart of a method for removing infrared image horizontal stripes by space-time-frequency joint compact coding according to an embodiment of the present application;
fig. 2 is a schematic composition diagram of an infrared image striation removal device and a mobile infrared apparatus according to an embodiment of the present application;
FIG. 3A is a schematic flowchart of a process for programming and implementing a method for removing infrared image horizontal stripes by space-time-frequency joint compact coding according to an embodiment of the present application;
FIG. 3B is a schematic flow chart of generating the spatial-domain coding sequence linepattern in the programming implementation of FIG. 3A;
FIG. 3C is a schematic flow chart of generating the time-series coding sequence timepattern in the programming implementation of FIG. 3A;
FIG. 3D is a schematic flow chart of a first part of generating the primary coding information sequence lineaverage_weight in the programming implementation of FIG. 3A;
FIG. 3E is a schematic flow chart of a second part of generating the primary coding information sequence lineaverage_weight in the programming implementation of FIG. 3A;
FIG. 3F is a schematic flow chart of the programming of FIG. 3A for implementing space-time-frequency joint compact coding;
FIG. 3G is a schematic flow chart illustrating the process of encoding and fusing the current frame infrared image by programming in FIG. 3A;
fig. 4 is a schematic diagram of a 9-grid edge extraction template in the method for removing the infrared image striations through space-time-frequency joint compact coding according to the embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by a person skilled in the art without creative effort on the basis of the embodiments of the present invention fall within the protection scope of the present invention. In addition, the technical features of the various embodiments or individual embodiments provided by the present invention may be combined with each other to form feasible technical solutions; such combinations are not limited by the order of steps or by structural composition, but must be realizable by a person skilled in the art, and when a combination of technical solutions is contradictory or cannot be realized, the combination should be considered not to exist and not to fall within the protection scope of the present invention.
To accurately describe the technical contents in the present application and to accurately understand the present application, the terms used in the present specification are given the following explanations or definitions before the description of the specific embodiments.
In physics, all substances above absolute zero (0K, i.e., -273.15 ℃) can produce infrared light (as well as other types of electromagnetic waves). Infrared (IR) is an electromagnetic wave having a frequency between microwave and visible light, is a generic term for radiation having a frequency of 0.3THz to 400THz (corresponding to a wavelength of 1mm to 750nm in vacuum) in the electromagnetic spectrum, and is invisible light having a frequency lower than that of red light.
As shown in fig. 1, the method for removing infrared image horizontal stripes by space-time-frequency joint compact coding according to the embodiment of the present application includes the following steps:
s11: determining line mean value information aiming at the acquired infrared image;
performing time sequence coding according to the determined line mean value information and a predetermined time sequence coding sequence to determine a primary coding information sequence after the time sequence coding;
performing space-domain coding according to the determined line mean information and a predetermined space-domain coding sequence to determine a primary coding information sequence after the space-domain coding;
performing space-time-frequency joint compact coding according to the determined primary coding information sequence after the space-domain coding, the determined primary coding information sequence after the time-sequence coding and a predetermined frequency domain coding coefficient set to determine a secondary coding information sequence after the space-time-frequency joint compact coding;
s12: according to the secondary coding information sequence after the space-time-frequency joint compact coding and the determined line mean value information, the obtained infrared image is subjected to coding fusion to determine a coded and fused infrared image, and the coded and fused infrared image is an infrared image with transverse striations removed;
in the above step S11, the space-time-frequency joint compact coding is implemented to determine the secondary coded information sequence after the space-time-frequency joint compact coding, which is specifically referred to as step S301 to step S309 below. In the above, step S12 encodes and fuses the images, specifically, see step S310 described below. Therefore, the method for removing the infrared image cross grain by the space-time-frequency joint compact coding comprehensively considers the characteristic that the target visual angle of the mobile infrared imaging equipment moves along with the time, makes full use of the time sequence, the space domain and the frequency domain information, removes the flickering fringe noise in the infrared image through the space-time-frequency joint compact coding, and avoids the introduction of the ghost phenomenon.
In some embodiments, it may further include: performing edge enhancement processing on the determined infrared image subjected to coding fusion to obtain an infrared image subjected to edge enhancement processing;
carrying out contrast correction processing on the determined infrared image subjected to coding fusion to obtain an infrared image subjected to contrast correction processing;
and weighting the infrared image after the edge enhancement processing and the infrared image after the contrast correction processing to obtain the infrared image with the horizontal stripes removed.
The above operation steps for implementing image enhancement are described in steps S311 to S317 below. In this way, the method for removing infrared image horizontal stripes by space-time-frequency joint compact coding avoids detail distortion and enhances the details of the image.
In some embodiments, said performing time-series encoding according to the determined line mean information and a predetermined time-series encoding sequence to determine a time-series encoded primary encoding information sequence includes:
weighting the line mean of the corresponding line of each reference frame image recorded in a line mean information reference queue with the time-series coding information corresponding to that reference frame image, to obtain the time-series-coded primary coding information sequence for the acquired infrared image;
wherein the number of line mean information sequences recorded in the line mean information reference queue is a predetermined timing reference frame number, and the number of pieces of time-series coding information recorded in the time-series coding sequence is the predetermined timing reference frame number;
the line mean information sequences recorded in the line mean information reference queue comprise a line mean sequence formed by the line means of the respective lines determined from the acquired infrared image and line mean sequences formed by the line means of the respective lines determined from other timing reference frame images, the other timing reference frame images being acquired before the acquired infrared image in time sequence. See specifically steps S601 to S607 described below.
In some embodiments, performing spatial coding according to the determined line mean information and a predetermined spatial coding sequence to determine a spatial-coded primary coded information sequence, includes:
dividing the acquired infrared image, in line order, into three mutually non-overlapping parts: a first part, a second part and a third part;
performing spatial-domain coding on each line in the first part in a first spatial-domain coding mode;
performing spatial-domain coding on each line in the third part in a third spatial-domain coding mode;
performing spatial-domain coding on each line in the second part in both the first spatial-domain coding mode and the third spatial-domain coding mode;
wherein performing spatial-domain coding on each line in the first part according to the determined line mean information and the predetermined spatial-domain coding sequence comprises:
weighting the line mean information of the lineparam/2 lines immediately following the corresponding line in the acquired infrared image with the respective pieces of spatial-domain coding information in the spatial-domain coding sequence, to obtain the spatial-domain-coded primary coding information corresponding to the corresponding line of the acquired infrared image;
and performing spatial-domain coding on each line in the third part according to the determined line mean information and the predetermined spatial-domain coding sequence comprises:
weighting the line mean information of the lineparam/2 lines immediately preceding the corresponding line in the acquired infrared image with the respective pieces of spatial-domain coding information in the spatial-domain coding sequence, to obtain the spatial-domain-coded primary coding information corresponding to the corresponding line of the acquired infrared image, wherein
lineparam is a spatial coding parameter whose value is not larger than the total number of lines of the acquired infrared image, and the spatial-domain coding sequence comprises lineparam/2 pieces of spatial-domain coding information. See in detail steps S608 to S625 described below.
In some embodiments, the performing space-time-frequency joint compact coding according to the determined space-time coded primary coding information sequence, the determined time-sequence coded primary coding information sequence, and a predetermined set of frequency-domain coding coefficients to determine a space-time-frequency joint compact coded secondary coding information sequence includes:
determining a primary coding information sequence after space-time coding according to the determined primary coding information sequence after space-domain coding and the determined primary coding information sequence after time-sequence coding;
and performing space-time-frequency joint compact coding according to the primary coding information sequence subjected to space-time coding and a predetermined frequency domain coding coefficient group to determine a secondary coding information sequence subjected to space-time-frequency joint compact coding. See specifically step S701 to step S713 described below.
In some embodiments, the encoding and fusing the acquired infrared image according to the secondary encoding information after the time-space-frequency joint compact encoding and the determined line mean information to determine an encoded and fused infrared image includes:
determining a compensation value of each corresponding line according to the secondary coding information after the space-time-frequency joint compact coding and the determined line mean value information;
and compensating, with the compensation value of each corresponding line, the characteristic values of the pixel points in all columns of that line, to determine the coded and fused infrared image. See specifically steps S801 to S808 described below.
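As the concrete compensation rule is given only in steps S801 to S808 (not reproduced in this section), the following Python lines are merely a sketch under the assumption that the per-line compensation is the difference between the secondary-coded value and the original line mean of that line:

    import numpy as np

    def encode_fuse(image_8bit, onelinemean_pre, lineaverage_weight_code):
        # assumed compensation: coded line value minus the original line mean (one value per line)
        compensation = lineaverage_weight_code - onelinemean_pre
        # apply each line's compensation to every column of that line
        fused = image_8bit.astype(np.float32) + compensation[:, None]
        return np.clip(fused, 0, 255).astype(np.uint8)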
In some embodiments, further comprising:
acquiring a predetermined time sequence weight coefficient and a predetermined time sequence reference frame number;
and generating time-series weights respectively corresponding to the timing reference frames according to the timing weight coefficient and the timing reference frame number, wherein each timing reference frame is equal to or earlier than the acquired infrared image in time sequence, and the time-series weight corresponding to each timing reference frame decreases as the temporal distance between that reference frame and the acquired infrared image increases. See specifically steps S501 to S507 described below.
In some embodiments, the method further comprises: generating the spatial-domain coding sequence comprising lineparam/2 pieces of spatial-domain coding information according to the spatial coding weight coefficient and the spatial coding parameter, wherein each piece of spatial-domain coding information corresponds to the spatial weight of a reference line of the current line of the acquired infrared image in the line order, and the spatial weight corresponding to each reference line decreases as the interval between that reference line and the current line in the line order increases. See specifically steps S401 to S406 described below.
In some embodiments, the predetermined set of frequency domain coding coefficients comprises a number of coding coefficients that is 2 times the coding width;
the performing the edge enhancement processing includes: adopting an edge extraction template with multiple grids to perform sliding convolution with the determined infrared image after the coding fusion so as to obtain an infrared image after edge enhancement processing;
the performing contrast correction processing includes: and gamma correction is carried out on the determined infrared image after the code fusion so as to obtain the infrared image after the contrast correction processing.
As shown in fig. 2, the infrared image striation removal device 200 according to the embodiment of the present application includes:
a space-time-frequency joint compact coding unit 210, configured to determine line mean information for the acquired infrared image;
performing time sequence coding according to the determined line mean value information and a predetermined time sequence coding sequence to determine a primary coding information sequence after the time sequence coding;
performing space-domain coding according to the determined line mean information and a predetermined space-domain coding sequence to determine a primary coding information sequence after the space-domain coding;
performing space-time-frequency joint compact coding according to the determined primary coding information sequence after the space-domain coding, the determined primary coding information sequence after the time-sequence coding and a predetermined frequency domain coding coefficient set to determine a secondary coding information sequence after the space-time-frequency joint compact coding;
and the encoding fusion unit 220 is configured to perform encoding fusion on the acquired infrared image according to the secondary encoding information sequence after the space-time-frequency joint compact encoding and the determined line mean value information to determine an encoded and fused infrared image, where the encoded and fused infrared image is an infrared image with striations removed.
As shown in fig. 2, a mobile infrared imaging apparatus 1000 according to an embodiment of the present invention is provided with an infrared detector 300 for movably acquiring an infrared image and an infrared image streak removal device 200 described above, which is configured to remove streaks of the infrared image acquired by the infrared detector. The infrared detector 300 and the infrared image streak removal apparatus 200 may be respectively disposed at different geographical locations.
TABLE 1 Variable comparison table
smoothnum: timing reference frame number
refresh_frame: refresh frame number
lineparam: spatial coding parameter
index: frame number of the current frame image
Image_8bit: current frame image stretched to an 8-bit storage depth
rows: number of lines of the image
onelinemean_pre: line mean information sequence of the current frame image
linemean_vector: line mean information reference queue
timeweight: timing weight coefficient
spatialweight: spatial coding weight coefficient
timepattern: time-series coding sequence
linepattern: spatial-domain coding sequence
edgepattern: edge extraction template
lineaverage_weight: primary coding information sequence
lineaverage_weight_code: secondary coding information sequence
mean_frame: image mean of the current frame
stand_mean: standard mean
linelop: spatial iteration variable
imgedge: edge image
image_gamma: contrast-corrected image
image_final: noise-eliminated image
In one embodiment, when program code is written to implement the method for removing infrared image horizontal stripes according to an embodiment of the present application, the program flow may include steps S301 to S317 shown in fig. 3A. For ease of reading, Table 1 above is provided as a reference for the variable names.
The following steps S301 to S317 are executed to process the frame images sequentially, in the chronological order in which they are generated. When each frame image is processed, image information of other frame images is also used. The processing of one frame image is described below as an example.
S301: Setting the timing reference frame number smoothnum, the refresh frame number refresh_frame, and the spatial coding parameter lineparam. Generally, the timing reference frame number smoothnum, the refresh frame number refresh_frame and the spatial coding parameter lineparam are all even numbers.
In the above, step S301 is executed earliest in the infrared image processing, and the values of the parameters set in step S301 are used when each frame of infrared image is processed later. Step S301 may be performed only when the first frame image is processed.
S302: Acquiring an image Image and the frame number index corresponding to the image, hereinafter called the current frame number; stretching the image Image to an 8-bit storage depth, recording the result as Image_8bit, hereinafter called the current frame image.
The stretching adjusts the range of the characteristic value of each pixel point of the image to 0-255. The image Image before stretching and the image Image_8bit after stretching each comprise pixel points arranged in rows and columns, each pixel point having its own characteristic value. The characteristic value of each pixel point may be a gray value acquired and processed by the infrared detector, or another characteristic value.
In the image processing, step S302 is executed for each frame image; for example, the value of the frame number index is 1 when the first frame image is processed, and the frame numbers index corresponding to the other processed frame images are set to the other positive integers in sequence.
S303: Calculating the line mean information sequence onelinemean_pre of the stretched image Image_8bit, and storing the calculated line mean information into the line mean information reference queue linemean_vector. In the image processing, step S303 is executed for every frame image.
The line mean information sequence is formed by taking, for each line of the image, the algebraic mean of the characteristic values of all pixel points in that line, and arranging these line means in line order.
The line mean information reference queue linemean_vector corresponds to the timing reference frame number smoothnum. It may be a two-dimensional array with smoothnum columns, whose columns record, from left to right, the line mean information sequences of the smoothnum-1 frame images preceding the current frame image followed by the line mean information sequence of the current frame image; or it may be a two-dimensional array with smoothnum rows, whose rows record, from top to bottom, the line mean information sequences of the smoothnum-1 frame images preceding the current frame image followed by the line mean information sequence of the current frame image.
In the following steps, after the obtained line mean information sequences are stored into the line mean information reference queue linemean_vector, the queue is an array of smoothnum rows arranged from top to bottom in order of increasing frame number; linemean_vector(j, i) is the i-th element of the j-th row, and the j-th row records the line mean information sequence of the lines of the j-th reference frame image, where j ≤ smoothnum and i ≤ rows, and the 1st to smoothnum-th reference frame images are, in order, the smoothnum-1 frame images before the current frame image and the current frame image.
Specifically, at initialization, the values of all elements of the line mean information reference queue linemean_vector are set to zero. Then, from bottom to top, the rows of the queue linemean_vector are successively replaced by the calculated line mean information sequences onelinemean_pre. When the number of accumulated line mean information sequences onelinemean_pre equals the timing reference frame number smoothnum, all rows of the line mean information reference queue linemean_vector have been updated. Subsequently, as the frame number of the current frame image increases, the rows of the queue linemean_vector slide upward in turn, so that the values of the elements of the last row are replaced by the newly calculated line mean information sequence onelinemean_pre of the current frame image, and the scale of the line mean information reference queue linemean_vector remains smoothnum rows.
In this way, following this bottom-up rule, the lowermost row of the line mean information reference queue corresponds to the newly calculated line mean information sequence (i.e., that of the current frame image), and the line mean information sequences of the other reference frame images are arranged upward in order, from the nearest to the farthest from the current frame image in time sequence.
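As an illustration of step S303 and of the sliding behaviour also used in step S308, the following Python sketch (variable names follow Table 1; numpy and the array shapes are assumptions of this sketch) computes the line mean information sequence of the current frame and slides it into the reference queue:

    import numpy as np

    def update_linemean_queue(image_8bit, linemean_vector):
        # onelinemean_pre: algebraic mean of the pixel characteristic values of each line
        onelinemean_pre = image_8bit.astype(np.float32).mean(axis=1)   # shape (rows,)
        # slide every row of the queue up by one and store the newest sequence in the last row,
        # so the bottom row always holds the current frame and older frames sit above it
        linemean_vector = np.roll(linemean_vector, -1, axis=0)
        linemean_vector[-1, :] = onelinemean_pre
        return onelinemean_pre, linemean_vector

    # the queue is initialised to zeros with smoothnum rows, one column per image line:
    # linemean_vector = np.zeros((smoothnum, rows), dtype=np.float32)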
S304: and judging whether the current frame number meets the condition that the current frame number is smaller than the time sequence reference frame number, namely whether the index < smoothnum is met. If so, step S305 is performed, otherwise step S307 is performed.
S305: and judging whether the current frame number is 1, namely whether the value of index is 1. If yes, step S306 is executed, otherwise step S311 is executed.
S306: Generating the time-series coding sequence timepattern, generating the spatial-domain coding sequence linepattern, generating the edge extraction template edgepattern, and jumping to step S311.
In the above, step S306 is executed earliest in the infrared image processing. The parameters determined in step S306 are used when each later frame of infrared image (including the first frame image) is processed. The parameters determined in step S306 include the time-series coding sequence timepattern, the spatial-domain coding sequence linepattern and the edge extraction template edgepattern. Step S306 may be executed only when the first frame image is processed.
S307: Calculating the primary coding information sequence lineaverage_weight using the line mean information reference queue linemean_vector, the time-series coding sequence timepattern and the spatial-domain coding sequence linepattern.
S308: Updating the line mean information reference queue linemean_vector.
Referring to step S303, as the current frame number increases, the rows of the line mean information reference queue linemean_vector slide upward in turn, so that the values of the elements of the last row are replaced by the calculated line mean information sequence onelinemean_pre of the current frame image; that is, the line mean information sequence of the reference frame that is no longer valid for the next frame image is removed from the queue linemean_vector.
Step S308 keeps the scale of the line mean information reference queue linemean_vector at smoothnum rows: the 2nd to the smoothnum-th rows of the queue slide upward in turn, and the values of the elements of the last row are reset to zero, so that when step S303 is executed for the next frame image, the line mean information sequence onelinemean_pre calculated in step S303 is stored into the last row of the queue linemean_vector; at that point the present current frame image becomes the (smoothnum-1)-th reference frame for the next frame image.
S309: Performing secondary coding on the primary coding information sequence lineaverage_weight to obtain the secondary coding information sequence lineaverage_weight_code.
S310: Coding and fusing the Image_8bit stretched in step S302 using the line mean information sequence onelinemean_pre calculated in step S303 and the secondary coding information sequence lineaverage_weight_code calculated in step S309.
S311: Calculating the image mean mean_frame. For the first to the (smoothnum-1)-th frame images, the image mean of the Image_8bit stretched in step S302 is calculated. For the other images, whose frame numbers are equal to or greater than the timing reference frame number smoothnum, the image mean of the image obtained by the coding fusion in step S310 is calculated.
S312: Judging whether the remainder of the current frame number divided by the refresh frame number is 1, i.e., whether index % refresh_frame == 1 holds. If so, step S313 is executed, otherwise step S314 is executed.
S313: Updating the standard mean, for example taking the image mean mean_frame calculated in step S311 as the standard mean stand_mean.
Steps S312 and S313 periodically update the standard mean stand_mean at time intervals of the refresh frame number refresh_frame. That is, the image mean mean_frame of the first frame image (frame number 1) and of each frame image with frame number P × refresh_frame + 1, where P is a natural number, is taken in turn as the standard mean stand_mean.
S314: Processing the Image_8bit stretched in step S302, or the image obtained by the coding fusion in step S310, with the standard mean stand_mean (updated in step S313 when the first frame image or an image with frame number P × refresh_frame + 1 is processed) and the image mean mean_frame determined in step S311, for example such that Image_8bit = Image_8bit - mean_frame + stand_mean.
S315: Performing sliding convolution on the Image_8bit processed in step S314 with the edge extraction template edgepattern generated in step S306 to obtain the edge image imgedge, which enhances details.
S316: Performing gamma correction on the Image_8bit processed in step S314 to obtain the corrected image image_gamma, which increases contrast.
As described above, step S315 and step S316 may be executed sequentially in the order described in the text, may be executed in parallel, or may be executed after the order is interchanged.
S317: Performing weighted fusion of the edge image imgedge obtained in step S315 and the corrected image image_gamma obtained in step S316 to obtain the noise-eliminated image image_final.
In step S317, the weight for the edge image imgedge and the weight for the corrected image image_gamma used in the weighted fusion can be determined flexibly according to the edges of the image, the contrast of the image, and the like.
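A minimal sketch of steps S314, S316 and S317 follows. The power-law form of the gamma correction and the values of gamma and w_edge are not specified in the text and are assumptions of this sketch:

    import numpy as np

    def enhance(image_8bit, mean_frame, stand_mean, imgedge, gamma=0.8, w_edge=0.3):
        # S314: shift the image so that its mean follows the periodically refreshed standard mean
        shifted = np.clip(image_8bit.astype(np.float32) - mean_frame + stand_mean, 0, 255)
        # S316: gamma correction to increase contrast (assumed power-law form)
        image_gamma = 255.0 * (shifted / 255.0) ** gamma
        # S317: weighted fusion of the edge image and the contrast-corrected image
        image_final = np.clip(w_edge * imgedge + (1.0 - w_edge) * image_gamma, 0, 255)
        return image_final.astype(np.uint8)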
In some embodiments, as shown in fig. 3B, generating the spatial-domain coding sequence linepattern in step S306 includes the following steps S401 to S406.
S401: initializing an iteration variable i = 1, setting the value of the spatial coding weight spatialweight (which ranges from 0 to 1, e.g., 0.2), and initializing an accumulation sum = 0.
S402: Judging whether the iteration variable is less than or equal to half the spatial coding parameter, i.e., whether i ≤ lineparam/2 holds; if so, S403 is executed, otherwise S406 is executed.
S403: Generating the i-th value of the spatial-domain coding sequence, i.e., the spatial-domain coding information linepattern(i) = 1/i.
S404: Updating the accumulated sum, sum = sum + linepattern(i).
S405: update iteration variable i = i +1, and jump to step S402.
In the above steps S402 to S405, the accumulated sum sum = 1/1 + 1/2 + … + 1/(lineparam/2) is obtained.
In the above steps S402 to S405, the value of the i-th piece of spatial-domain coding information linepattern(i) in the spatial-domain coding sequence linepattern is determined in turn as the iteration variable i is incremented. The spatial-domain coding sequence comprises lineparam/2 pieces of spatial-domain coding information, each of which is less than or equal to 1.
S406: Updating the spatial-domain coding sequence, linepattern = linepattern × (spatialweight/2)/sum.
In the above step S406, the lineparam/2 pieces of spatial-domain coding information in the spatial-domain coding sequence linepattern are normalised and scaled in advance by half of the spatial coding weight.
Referring to step S403, the larger the value of i, the smaller the value of linepattern(i). Referring to step S613, the farther a reference line is from the current line spatially (i.e., the larger the difference between their line numbers), the smaller the weight of that reference line when the coding information sequence is determined in a weighted manner; that is, the weight decreases with distance.
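Steps S401 to S406 can be sketched as follows in plain Python; the list is 0-based, so linepattern[i-1] corresponds to linepattern(i) in the text, and the default spatialweight is only the example value mentioned in S401:

    def make_linepattern(lineparam, spatialweight=0.2):
        # S403: reciprocal weights 1/i for i = 1 .. lineparam/2
        linepattern = [1.0 / i for i in range(1, lineparam // 2 + 1)]
        total = sum(linepattern)                                    # S404: accumulated sum
        # S406: normalise and scale by spatialweight/2
        return [(spatialweight / 2.0) * v / total for v in linepattern]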
In some embodiments, as shown in fig. 3C, generating the time-series coding sequence timepattern in step S306 includes the following steps S501 to S507.
S501: setting an iteration variable i = 2, setting the value of the timing weight timeweight (the value ranges from 0 to 1, such as 0.3), and initializing the accumulation sum = 0.
S502: Judging whether the iteration variable is less than the timing reference frame number, i.e., whether i < smoothnum holds; if so, step S503 is executed, otherwise step S506 is executed.
S503: Generating the i-th value of the time-series coding sequence, i.e., the time-series coding information timepattern(i) = 1/(i + 1).
S504: Updating the accumulated sum, sum = sum + timepattern(i).
S505: update iteration variable i = i +1, and jump to step S502.
S506: Updating the time-series coding sequence, timepattern = timepattern × (1 - timeweight - spatialweight)/sum.
S507: timepattern(1) = timeweight.
In the above steps S502 to S505, the accumulated sum sum = 1/3 + 1/4 + … + 1/smoothnum is obtained.
In step S506, in order to keep the values of the time-series coding information timepattern(i) positive, the timing weight timeweight set in step S501 and the spatial coding weight spatialweight set in step S401 are chosen so that their sum is not greater than 1.
In the above steps S503 to S505, the values of the remaining (smoothnum-2) pieces of time-series coding information timepattern(i) in the time-series coding sequence timepattern are determined in turn as the iteration variable i is incremented, and each piece of time-series coding information is less than 1.
In the above step S506, the values of these (smoothnum-2) pieces of time-series coding information in the time-series coding sequence timepattern are normalised and scaled in advance by (1 - timeweight - spatialweight).
Together with step S507, there are (smoothnum-1) pieces of time-series coding information timepattern(i) in the time-series coding sequence timepattern.
In the above, the iteration variable i = smoothnum - j + 1; referring to step S503, the larger j is, the smaller smoothnum - j + 1 is and the larger the corresponding time-series coding information is. Referring to step S606 described below, the smaller the temporal distance between the j-th reference frame image and the current frame image, the greater the weight of the j-th reference frame image when the primary coding information of the current frame is determined in a weighted manner; that is, the weight decreases as the temporal distance increases.
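A sketch of steps S501 to S507, following the loop condition i < smoothnum as written (the dictionary keeps the 1-based indices used in the text, and the default timeweight and spatialweight are only the example values mentioned in S501 and S401):

    def make_timepattern(smoothnum, timeweight=0.3, spatialweight=0.2):
        # S503: reciprocal weights 1/(i + 1) for i = 2 .. smoothnum - 1 (assumes smoothnum > 2)
        timepattern = {i: 1.0 / (i + 1) for i in range(2, smoothnum)}
        total = sum(timepattern.values())                       # S504: accumulated sum
        scale = (1.0 - timeweight - spatialweight) / total      # S506: normalise and scale
        timepattern = {i: v * scale for i, v in timepattern.items()}
        timepattern[1] = timeweight                             # S507
        return timepattern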
In some embodiments, in step S306, the generated edge extraction template edgepattern is a multi-grid template with Q² cells, for example 9 cells or 16 cells (that is, the number of cells Q in each row equals the number of cells Q in each column). In the 9-grid edge extraction template edgepattern shown in fig. 4, the value of the central cell is 9 and the values of the border cells are all -1. Referring to the foregoing description, in step S315 the generated edge extraction template edgepattern is used to perform sliding convolution on the image to enhance details, which is not described again.
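For the 9-grid template of fig. 4, the sliding convolution of step S315 can be sketched as follows; scipy and the boundary mode are assumptions of this sketch, and any 2-D convolution routine would do:

    import numpy as np
    from scipy.ndimage import convolve

    # 9-grid edge extraction template of fig. 4: 9 in the central cell, -1 in the border cells
    edgepattern = np.array([[-1, -1, -1],
                            [-1,  9, -1],
                            [-1, -1, -1]], dtype=np.float32)

    def edge_enhance(image_8bit):
        # sliding convolution of the frame with the template (step S315) to obtain imgedge
        return convolve(image_8bit.astype(np.float32), edgepattern, mode='nearest')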
In some embodiments, as shown in fig. 3D, in step S307, for each frame image whose frame number is not smaller than the timing reference frame number, the primary coding information sequence lineaverage_weight is calculated using the line mean information reference queue linemean_vector and the time-series coding sequence timepattern, which includes the following steps S601 to S607.
S601: Acquiring the number of lines rows of the image, and acquiring the timing reference frame number smoothnum.
S602: the row iteration variable i = 1 and the frame iteration variable j = 1 are initialized.
S603: and judging whether the row iteration variable is less than or equal to the image row number, namely i is less than or equal to rows. If yes, go to step S604, otherwise go to step S608.
S604: and judging whether the iteration variable of the frame number is less than or equal to the time sequence reference frame number, namely j is less than or equal to smoothnum. If yes, go to step S606, otherwise execute step S605.
S605: the update row iteration variable i = i +1, the update frame number iteration variable j = 1, and the process skips to step S603.
S606: Calculating the i-th primary coding information in the primary coding information sequence using the line mean information reference queue and the time-series coding sequence, namely lineaverage_weight(i) = lineaverage_weight(i) + timepattern(smoothnum - j + 1) × linemean_vector(j, i);
S607: Updating j = j + 1, and jumping to step S604.
In the above steps S604, S606 and S607, the line mean of the i-th line of each reference frame image recorded in the line mean information reference queue linemean_vector is weighted with the time-series coding information corresponding to that reference frame image, to obtain the i-th primary coding information in the primary coding information sequence, which corresponds to the i-th line of the current image.
In the above steps S603 to S607, the line means of the reference frames recorded in the line mean information reference queue, weighted with the time-series coding information corresponding to each reference frame image, yield the primary coding information sequence lineaverage_weight for the lines of the current image. In this way, through the time-series coding, the primary coding information of the current frame image is associated in time sequence with the line mean information of each reference frame image.
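A sketch of the weighting in steps S603 to S607 follows; timepattern maps the 1-based indices used in the text to weights (for example the dictionary sketched after step S507 above), linemean_vector has smoothnum rows with the oldest reference frame first and the current frame last, and the indexing follows S606:

    import numpy as np

    def timeseries_encode(linemean_vector, timepattern, smoothnum, rows):
        lineaverage_weight = np.zeros(rows, dtype=np.float32)
        for j in range(1, smoothnum + 1):                  # j-th reference frame, oldest first
            # S606: weight the j-th frame's line means with timepattern(smoothnum - j + 1);
            # an index not generated in S501-S507 contributes 0 in this sketch
            w = timepattern.get(smoothnum - j + 1, 0.0)
            lineaverage_weight += w * linemean_vector[j - 1, :]
        return lineaverage_weight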
In some embodiments, as shown in fig. 3E, in step S307, for each frame image whose frame number is not smaller than the timing reference frame number, the primary coding information sequence lineaverage_weight and the secondary coding information sequence lineaverage_weight_code are calculated using the line mean information reference queue linemean_vector, the time-series coding sequence timepattern and the spatial-domain coding sequence linepattern, which includes the following steps S608 to S625.
S608: initializing a spatial iteration variable linelop = 1.
S609: Judging whether the spatial iteration variable is less than or equal to the number of lines of the image, i.e., whether linelop ≤ rows holds. If yes, go to step S610; otherwise, the coding for the current frame image is finished.
S610: Judging whether the spatial iteration variable is less than or equal to half the spatial coding parameter, i.e., whether linelop ≤ lineparam/2 holds; if so, executing step S611; otherwise, going to step S616.
S611: the local iteration variable i = 1 is initialized.
S612: Judging whether the local iteration variable is less than or equal to half the spatial coding parameter, i.e., whether i ≤ lineparam/2 holds. If yes, go to step S613; otherwise, go to step S615.
S613: Based on the spatial-domain coding sequence linepattern and the line mean information sequence onelinemean_pre of the current frame image calculated in step S303, recalculating the linelop-th primary coding information in the primary coding information sequence determined in step S606, namely lineaverage_weight(linelop) = lineaverage_weight(linelop) + 2 × linepattern(i) × onelinemean_pre(linelop + i).
S614: the local iteration variable i = i +1 is updated, and the process proceeds to step S612.
S615: Updating linelop = linelop + 1, and jumping to step S609.
In the above steps S611, S612, S613 and S614, the line mean information of the current frame image is weighted by the spatial-domain coding sequence linepattern with a factor of 2, and the primary coding information lineaverage_weight(linelop) for the linelop-th line of the image determined in the above step S606 is updated.
Referring to step S610, the value of the spatial iteration variable linelop is 1 to lineparam/2; referring to step S406, the spatial-domain coding sequence linepattern comprises lineparam/2 pieces of spatial-domain coding information, each linepattern(i) being less than 1. In this way, step S613 is executed lineparam/2 times, and the line mean information onelinemean_pre(linelop) of the linelop-th line of the current frame image is weighted, via the spatial-domain coding sequence linepattern, with the line mean information of the lineparam/2 lines immediately following it in the current frame image.
In the above steps S609, S610, S611, S612, S613, S614 and S615, the spatial-domain coding sequence linepattern is used to weight in turn the line mean information of the 1st to the (lineparam/2)-th lines of the current frame image, and the primary coding information lineaverage_weight(linelop) respectively corresponding to the 1st to the (lineparam/2)-th lines of the current frame image determined in the above step S606 is updated.
S616: Judging whether the spatial iteration variable is greater than or equal to the number of image lines minus half the spatial coding parameter plus one, i.e., whether linelop ≥ rows - lineparam/2 + 1 holds. If yes, step S617 is executed; otherwise, the process jumps to step S621. That is, when the spatial iteration variable linelop is greater than or equal to lineparam/2 + 1 and less than or equal to rows - lineparam/2, the process jumps to step S621.
S617: the local iteration variable i = 1 is initialized.
S618: and judging whether 1/2 that the local iteration variable is smaller than the spatial coding parameter is met, namely whether i is less than or equal to lineparam/2 is met, if so, executing the step S619, otherwise, turning to the step S615.
S619: the first linear primary coded information, linear _ weight (linear) = linear _ weight (linear) + 2 linear attern (i) — onelinebreast _ pre (linear-i), in the primary coded information sequence determined in the foregoing step S606 is calculated again.
S620: the local iteration variable i = i +1 is updated, and the process jumps to step S618.
In the above steps S617, S618, S619, and S620, the line mean information sequence onelinemean_pre of the current frame image is weighted, with a factor of 2, by the spatial coding sequence linepattern, and the primary coding information lineaverage_weight(linelop) for the linelop-th line of the image determined in the above step S606 is updated.
Referring to step S616, the value of the spatial iteration variable linelop is rows - lineparam/2 + 1 to rows; referring to step S406, the spatial coding sequence linepattern includes lineparam/2 pieces of spatial coding information, and the value of each linepattern(i) is less than 1. In this way, step S619 is repeatedly executed lineparam/2 times, and the line mean information of a total of lineparam/2 lines adjacent to and before the linelop-th line of the current frame image is weighted by the spatial coding sequence linepattern and accumulated into the primary coding information of the linelop-th line.
In the above steps S609, S616, S617, S618, S619, S620, and S615, the spatial coding sequence linepattern is used to sequentially weight the line mean information of the (rows - lineparam/2 + 1)-th line to the rows-th line of the current frame image, and the primary coding information lineaverage_weight(linelop) respectively corresponding to the (rows - lineparam/2 + 1)-th line to the rows-th line of the current frame image determined in the above step S606 is updated.
S621: the local iteration variable i = 1 is initialized.
S622: and judging whether 1/2 that the local iteration variable is smaller than the spatial coding parameter is met, namely whether i is less than or equal to lineparam/2 is met, if so, executing step S623, otherwise, turning to step S615.
S623: the first linear primary coded information, linear _ weight (linear) = linear _ weight (linear) + linear, and (i) onelineman _ pre (linear + i), in the primary coded information sequence determined in the foregoing step S606 is calculated again.
S624: the first linear primary encoding information in the primary encoding information sequence determined in the foregoing step S623 is calculated again: lineverage _ weight (linetools) = lineverage _ weight (linetools) + lineattron (i) onelinemen _ pre (linetools-i).
S625: the local iteration variable i = i +1 is updated, and the process jumps to step S622.
In the above steps S621, S622, S623, S624, and S625, the line mean information sequence onelinemean_pre of the current frame image is weighted by the spatial coding sequence linepattern, and the primary coding information lineaverage_weight(linelop) for the linelop-th line of the image determined in the above step S606 is updated. Referring to steps S610 and S616, the value of linelop is lineparam/2 + 1 to rows - lineparam/2; referring to step S406, the spatial coding sequence linepattern includes lineparam/2 pieces of spatial coding information, and the value of each linepattern(i) is less than 1. In this way, steps S623 and S624 are repeatedly executed lineparam/2 times, and the line mean information of a total of lineparam/2 lines adjacent to and before the linelop-th line of the current frame image, together with the line mean information of a total of lineparam/2 lines adjacent to and after it, is weighted by the spatial coding sequence linepattern and accumulated into the primary coding information of the linelop-th line.
In the above steps S609, S610, S616, S621, S622, S623, S624, S625, and S615, the line mean information sequence onelinemean_pre of the (lineparam/2 + 1)-th line to the (rows - lineparam/2)-th line of the current frame image is sequentially weighted by the spatial coding sequence linepattern, and the primary coding information respectively corresponding to the (lineparam/2 + 1)-th line to the (rows - lineparam/2)-th line of the current frame image determined in the above step S606 is updated.
As described above, in steps S608 to S625, the line mean value of each line of the current frame image described in the line mean value information is weighted by the spatial coding information described in the spatial coding sequence, and the primary coding information sequence lineaverage_weight for each line of the current frame image is obtained. In this way, the primary coding information for each line of the current frame image is associated, by the spatial coding processing, with the line mean value information of the other lines of the current frame image. According to the value of the spatial coding parameter lineparam (not more than rows/2), all rows of the image are sequentially divided into 3 parts from top to bottom: the first part comprises lineparam/2 lines and adopts a first spatial coding mode; the second part comprises (rows - lineparam) lines and adopts a second spatial coding mode; the third part comprises lineparam/2 lines and adopts a third spatial coding mode. The spatial coding parameter lineparam is used to adjust the inter-line span of the spatial information to be utilized.
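For illustration only, a minimal Python sketch of the three-part spatial coding of steps S608 to S625 is given below; it is not the claimed implementation. The function name spatial_encode, the use of NumPy and the 0-based indexing are assumptions of the sketch, while the array names lineaverage_weight, onelinemean_pre and linepattern and the parameter lineparam follow the embodiment above.
    import numpy as np

    def spatial_encode(lineaverage_weight, onelinemean_pre, linepattern, lineparam):
        # Sketch of steps S608-S625, using 0-based indices (the embodiment counts from 1).
        # lineaverage_weight : primary coding information after time-series coding, one value per line
        # onelinemean_pre    : line mean information of the current frame, one value per line
        # linepattern        : lineparam/2 spatial coding weights, each less than 1
        rows = len(onelinemean_pre)
        half = lineparam // 2
        out = np.asarray(lineaverage_weight, dtype=np.float64).copy()
        for linelop in range(rows):
            if linelop < half:                       # first part (steps S611-S614): lines below
                for i in range(half):
                    out[linelop] += 2.0 * linepattern[i] * onelinemean_pre[linelop + i + 1]
            elif linelop >= rows - half:             # third part (steps S617-S620): lines above
                for i in range(half):
                    out[linelop] += 2.0 * linepattern[i] * onelinemean_pre[linelop - i - 1]
            else:                                    # second part (steps S621-S625): both sides
                for i in range(half):
                    out[linelop] += linepattern[i] * (onelinemean_pre[linelop + i + 1]
                                                      + onelinemean_pre[linelop - i - 1])
        return out
Note that the factor of 2 in the one-sided first and third parts mirrors steps S613 and S619, so the sum of the spatial weights applied to a border line equals the sum applied to an interior line of the second part.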
In the above steps S601 to S625, the steps are executed in sequence or by jumping according to the step numbers, and an iteration variable that appears in more than one place, such as i, is used and updated within the steps related to it.
In the above steps S601 to S607, the time-series coding is performed according to the time-series coding sequence and the line mean information corresponding to the images of the total reference frame number, and the primary coding information sequence is generated. In the above steps S608 to S625, the spatial coding is performed based on the spatial coding sequence and the line mean information of the current frame image, and the primary coding information sequence is updated. Further, steps S608 to S625 inherit the primary coding information sequence produced by the time-series coding processing of steps S601 to S607. As described above, from step S601 to step S625, the time-domain coding is performed first and then the spatial coding is performed.
In some embodiments, referring to the above description, the spatial coding may be performed first and the time-series coding afterwards, so that the subsequent time-series coding inherits the primary coding information sequence produced by the preceding spatial coding processing; this has the same technical effect and is not described again.
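Since steps S601 to S607 are only referred to here, a correspondingly hedged Python sketch of the time-series coding is added for context; the function name temporal_encode and the data layout (a list of per-line mean arrays, oldest frame first and current frame last) are assumptions, while the behaviour follows the description above: the line means of each reference frame in linemean_vector are weighted by the matching entry of the time-series coding sequence timepattern and accumulated line by line.
    import numpy as np

    def temporal_encode(linemean_vector, timepattern):
        # linemean_vector : list of 1-D arrays, one per reference frame (oldest ... current frame),
        #                   each holding the line mean information of that frame
        # timepattern     : one time-series coding weight per reference frame
        lineaverage_weight = np.zeros_like(np.asarray(linemean_vector[0], dtype=np.float64))
        for weight, line_means in zip(timepattern, linemean_vector):
            lineaverage_weight += weight * np.asarray(line_means, dtype=np.float64)  # cf. step S606
        return lineaverage_weight
Because both this pass and the spatial coding pass merely accumulate weighted line means into the primary coding information sequence, applying them in either order yields the same sum, which is consistent with the remark that the two orderings have the same technical effect.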
Referring to the above description, in step S309, the primary coding information sequence lineaverage_weight generated in step S307 is subjected to secondary coding processing using a predetermined frequency-domain coding coefficient group. Specifically, a two-dimensional digital frequency-domain filter is designed according to the characteristics of the infrared image in the frequency domain, and the frequency-domain coding coefficient group determined for the two-dimensional digital frequency-domain filter is adopted. For example, a and b denote a first dimension and a second dimension, respectively, and the number of coefficients in each dimension is also referred to as the coding width. Thus, in the frequency-domain coding coefficient group, the number of coding coefficients is 2 times the coding width. If the coding width is 3, 6 coding coefficients are assigned, denoted a(0), a(1), a(2), b(0), b(1), and b(2). The two-dimensional digital frequency-domain filter may have any one or any combination of low-pass, high-pass, band-stop, notch and other characteristics, and the frequency-domain characteristics of the filter are switched by the coding coefficients.
In some embodiments, a two-dimensional digital frequency-domain filter with a coding width of 2 is designed for infrared images collected by a certain type of infrared device, and the 4 coding coefficients of the corresponding frequency-domain coding coefficient group are determined as a(0), a(1), b(0), and b(1). As shown in fig. 3F, performing the secondary coding processing on the primary coding information sequence lineaverage_weight generated in step S307 using the frequency-domain coding coefficient group includes the following steps S701 to S715.
S701: setting a coding width = 2, initializing values of coding coefficients a (0), a (1), b (0), b (1), e.g., setting a (0) = 1, a (1) = -0.917797364002063; b (0) = 0.041101317998969, b (1) = 0.041101317998969; and acquiring the number rows of the image.
S702: an initialization line number iteration variable i = 2, an initialization encoding width iteration variable j = 0, and an initialization local iteration variable n = 0.
S703: initializing a first value in a secondary coded information sequence, i.e., setting linear _ weight _ code (1) = b (0) × linear _ weight (1).
S704: and judging whether the row number iteration variable is less than or equal to the image row number, namely whether i is less than or equal to rows, if so, executing the step S705, and otherwise, ending the secondary coding.
S705: each numerical value in the secondary coded information sequence linearity _ weight _ code is set, that is, for example, linearity _ weight _ code (i) = 0 is set.
S706: the encoding width iteration variable j = 0 is set.
S707: and judging whether the iteration variable meeting the coding width is smaller than the coding width, namely, whether j < width is met. If so, step S708 is performed, otherwise step S711 is performed.
S708: and judging whether the row number iteration variable is larger than the encoding width iteration variable, namely, whether i > j is satisfied. If yes, step S709 is executed, otherwise step S710 is executed.
S709: the i-th numerical value, linedirection _ weight _ code, (i) = linedirection _ weight _ code, (i) + b (j) linedirection _ weight (i-j), in the secondary encoded information sequence is updated by using the primary encoded information sequence linedirection _ weight and the encoding coefficient generated in the step S307.
S710: the encoding width iteration variable is updated, e.g., j = j +1 is set, and the process proceeds to step S707.
S711: and judging whether the local iteration variable is not more than (width-1), namely, whether n < width-1 is satisfied. If so, step S712 is performed, otherwise step S715 is performed.
S712: and judging whether the row number iteration variable is larger than the sum of the local iteration variable and 1, namely, whether i > n +1 is satisfied, namely, i is smaller than or equal to n. If so, step S713 is performed, otherwise step S714 is performed.
S713: updating the i-th numerical value of the secondary coded information sequence, namely, linear _ weight _ code (i) = (linear _ weight _ code (i)) -a (n +1) × linear _ weight _ code (i-n-1))/a (0).
S714: the local iteration variable n = n +1 is updated, and the process jumps to step S711.
S715: the iteration variable i = i +1 for the updated row number goes to step S704.
In the above steps S706 to S710, the i-th value lineaverage_weight_code(i) in the secondary coding information sequence (starting from the second value) is updated using the second dimension b of the frequency-domain coding coefficient group; in steps S711 to S713, the i-th value lineaverage_weight_code(i) in the secondary coding information sequence (starting from the second value) is updated using the first dimension a of the frequency-domain coding coefficient group.
In the above steps S704 to S715, the secondary coding information sequence lineaverage_weight_code is updated for the image line by line, so as to implement the secondary coding based on the frequency domain.
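Viewed as a whole, steps S701 to S715 apply a direct-form difference equation driven by the coefficient group (a, b). A minimal Python sketch follows for illustration only; the function name frequency_encode and the NumPy framing are assumptions, while the update rules follow steps S703, S709 and S713 (with a(0) = 1, the repeated division by a(0) in step S713 reduces to an ordinary recursive feedback term).
    import numpy as np

    def frequency_encode(lineaverage_weight, a, b):
        # a, b : frequency-domain coding coefficient group; len(a) == len(b) == coding width
        rows = len(lineaverage_weight)
        width = len(b)
        code = np.zeros(rows, dtype=np.float64)
        code[0] = b[0] * lineaverage_weight[0]               # step S703
        for i in range(1, rows):                             # steps S704-S715
            value = 0.0
            for j in range(width):                           # second dimension b, steps S706-S710
                if i >= j:
                    value += b[j] * lineaverage_weight[i - j]
            for n in range(width - 1):                       # first dimension a, steps S711-S714
                if i >= n + 1:
                    value = (value - a[n + 1] * code[i - n - 1]) / a[0]
            code[i] = value
        return code

    # Example coefficient group from step S701 (coding width 2); these particular values
    # are consistent with a first-order low-pass design.
    a = [1.0, -0.917797364002063]
    b = [0.041101317998969, 0.041101317998969]
For coding widths greater than 2 the same loops apply; only the lengths of a and b change.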
In some embodiments, as shown in fig. 3G, in step S310, coding and fusing the image by using the line mean information sequence onelinemean_pre of the current frame image calculated in step S303 and the secondary coding information sequence lineaverage_weight_code obtained by the secondary coding processing in step S309 includes the following steps S801 to S808.
S801: the number of rows and the number of columns cols of the Image are acquired.
S802: the row iteration variable i = 1 is initialized, and the column iteration variable j = 1 is initialized.
S803: and judging whether the row iteration variable is less than or equal to the rows of the image, namely whether i is less than or equal to rows, if so, executing the step S804, otherwise, indicating that the coding information fusion is completed.
S804: update line mean difference value diff (i) = onelinemean _ pre (i) -linear _ weight _ code (i).
S805: and judging whether the column iteration variable is less than or equal to the column number cols of the image, namely whether j is less than or equal to cols. If so, step S807 is executed, otherwise step S806 is executed.
S806: the row iteration variable i = i +1 is updated, and the column iteration variable j = 1 is updated, and the process proceeds to step S803.
S807: the update Image _ process (i, j) = Image (i, j) -diff (i).
S808: update column iteration variable j = j +1, and jump to step S805.
In the above steps S807 and S808, the feature values corresponding to the pixel points in each column of the i-th row are updated one by one.
In the above steps S803 to S808, the feature values corresponding to the pixel points in the image are updated line by line.
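Closing the loop, a minimal Python sketch of the coding fusion of steps S801 to S808 is given below, together with a small driver that chains it to the sketches above; the function names encode_fuse and remove_stripes, the vectorised column update, and the assumption that the current frame's line means are the last entry of linemean_vector are illustrative choices, not the claimed implementation.
    import numpy as np

    def encode_fuse(image, onelinemean_pre, lineaverage_weight_code):
        # diff(i) = onelinemean_pre(i) - lineaverage_weight_code(i)      (step S804)
        diff = np.asarray(onelinemean_pre, dtype=np.float64) - np.asarray(lineaverage_weight_code)
        # Image_process(i, j) = Image(i, j) - diff(i)                    (steps S805-S808, vectorised)
        return np.asarray(image, dtype=np.float64) - diff[:, None]

    def remove_stripes(image, linemean_vector, timepattern, linepattern, lineparam, a, b):
        # End-to-end sketch: time-series coding, spatial coding, frequency-domain secondary
        # coding, then coding fusion (cf. steps S601-S625, S701-S715 and S801-S808).
        onelinemean_pre = linemean_vector[-1]            # line mean information of the current frame
        weight = temporal_encode(linemean_vector, timepattern)
        weight = spatial_encode(weight, onelinemean_pre, linepattern, lineparam)
        code = frequency_encode(weight, a, b)
        return encode_fuse(image, onelinemean_pre, code)
Replacing the explicit column loop of steps S805 to S808 with the broadcast subtraction diff[:, None] is only a programming convenience; the per-pixel result is identical.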
The above description is only exemplary of the present application and should not be taken as limiting the present invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

Claims (6)

1. A method for removing infrared image horizontal stripes by space-time-frequency joint compact coding is characterized by comprising the following steps:
determining line mean value information aiming at the acquired infrared image;
performing time sequence coding according to the determined line mean value information and a predetermined time sequence coding sequence to determine a primary coding information sequence after the time sequence coding:
weighting the line mean value of the corresponding line of each reference frame image recorded in the line mean value information reference queue with the time sequence coding information corresponding to each reference frame image respectively, to obtain a primary coding information sequence after time sequence coding for the acquired infrared image; the number of line mean value information sequences recorded in the line mean value information reference queue is a predetermined time sequence reference frame number, and the number of pieces of time sequence coding information recorded in the time sequence coding sequence is the predetermined time sequence reference frame number; the line mean value information sequences recorded in the line mean value information reference queue comprise a line mean value sequence formed by the line mean values respectively corresponding to all lines determined according to the acquired infrared image and line mean value sequences formed by the line mean values respectively corresponding to all lines determined according to other time sequence reference frame images, the other time sequence reference frame images being acquired in time sequence before the acquired infrared image;
according to the determined line mean information and a predetermined space domain coding sequence, space domain coding is carried out to determine a primary coding information sequence after space domain coding:
dividing the acquired infrared image, according to the sequence of lines, into a first part, a second part and a third part that do not overlap one another;
performing space domain coding on each line in the first part by adopting a first space domain coding mode;
performing spatial coding on each line in the third part by adopting a third spatial coding mode;
performing space-domain coding on each line in the second part by adopting the first space-domain coding mode and the third space-domain coding mode;
according to the determined line mean information and a predetermined spatial coding sequence, respectively carrying out spatial coding on each line in the first part, wherein the spatial coding comprises the following steps:
weighting the line mean value information of a total of lineparam/2 lines, which are adjacent to and after the corresponding line, in the acquired infrared image with each piece of space-domain coding information in the space-domain coding sequence respectively, to obtain a primary coding information sequence after space-domain coding corresponding to the corresponding line of the acquired infrared image;
according to the determined line mean information and a predetermined spatial coding sequence, respectively performing spatial coding on each line in the third part, wherein the spatial coding comprises the following steps:
weighting the line mean value information of a total of lineparam/2 lines, which are adjacent to and before the corresponding line, in the acquired infrared image with each piece of space-domain coding information in the space-domain coding sequence respectively, to obtain a primary coding information sequence after space-domain coding corresponding to the corresponding line of the acquired infrared image, wherein lineparam is a spatial coding parameter, the value of lineparam is not more than the total number of lines of the acquired infrared image, and the space-domain coding sequence comprises lineparam/2 pieces of space-domain coding information;
according to the determined primary coding information sequence after the space-domain coding, the determined primary coding information sequence after the time-sequence coding and a predetermined frequency domain coding coefficient set, performing space-time-frequency joint compact coding to determine a secondary coding information sequence after the space-time-frequency joint compact coding:
determining a primary coding information sequence after space-time coding according to the determined primary coding information sequence after space-domain coding and the determined primary coding information sequence after time-sequence coding;
performing space-time-frequency joint compact coding according to the primary coding information sequence subjected to space-time coding and a predetermined frequency domain coding coefficient set to determine a secondary coding information sequence subjected to space-time-frequency joint compact coding;
according to the secondary coding information sequence after the space-time-frequency joint compact coding and the determined line mean value information, carrying out coding fusion on the obtained infrared image so as to determine a coded and fused infrared image:
determining a compensation value of each corresponding line according to the secondary coding information after the space-time-frequency joint compact coding and the determined line mean value information;
and compensating the characteristic values of the pixel points in each corresponding line by using the compensation value of that line, to determine an infrared image after encoding fusion, wherein the infrared image after encoding fusion is an infrared image with the horizontal stripes removed.
2. The method of claim 1, further comprising:
acquiring a predetermined time sequence weight coefficient and a predetermined time sequence reference frame number;
and generating time sequence weights respectively corresponding to the time sequence reference frames according to the time sequence weight coefficient and the time sequence reference frame number, wherein the time sequence reference frames are respectively equal to or earlier than the acquired infrared image in time sequence, and the time sequence weight corresponding to each time sequence reference frame decreases as the interval, in time sequence, between that reference frame and the acquired infrared image increases.
3. The method of claim 2, further comprising:
and generating the spatial coding sequence comprising lineparam/2 pieces of spatial coding information according to the spatial coding weight coefficient and the spatial coding parameter, wherein the spatial coding information corresponds to the spatial weights of reference lines relative to the current line of the acquired infrared image in the line sequence, and the spatial weight corresponding to each reference line decreases as the interval, in the line sequence, between that reference line and the current line increases.
4. The method of claim 3,
the predetermined set of frequency domain coding coefficients comprises a number of coding coefficients that is 2 times the coding width;
after the obtained infrared image is coded and fused, the method further comprises the following steps:
adopting an edge extraction template with multiple grids for the determined coded and fused infrared image, and performing sliding convolution on the coded and fused infrared image to obtain an edge-enhanced infrared image;
carrying out contrast correction processing on the determined infrared image subjected to coding fusion to obtain an infrared image subjected to contrast correction processing;
and weighting the infrared image after the edge enhancement processing and the infrared image after the contrast correction processing to obtain the infrared image with the horizontal stripes removed.
5. An infrared image horizontal stripe removal device, comprising:
the space-time-frequency joint compact coding unit is used for determining line mean value information aiming at the acquired infrared image;
performing time sequence coding according to the determined line mean value information and a predetermined time sequence coding sequence to determine a primary coding information sequence after time sequence coding: weighting the line mean value of the corresponding line of each reference frame image recorded in the line mean value information reference queue with the time sequence coding information corresponding to each reference frame image respectively, to obtain a primary coding information sequence after time sequence coding for the acquired infrared image; the number of line mean value information sequences recorded in the line mean value information reference queue is a predetermined time sequence reference frame number, and the number of pieces of time sequence coding information recorded in the time sequence coding sequence is the predetermined time sequence reference frame number; the line mean value information sequences recorded in the line mean value information reference queue comprise a line mean value sequence formed by the line mean values respectively corresponding to all lines determined according to the acquired infrared image and line mean value sequences formed by the line mean values respectively corresponding to all lines determined according to other time sequence reference frame images, the other time sequence reference frame images being acquired in time sequence before the acquired infrared image;
performing space-domain coding according to the determined line mean information and a predetermined space-domain coding sequence to determine a primary coding information sequence after space-domain coding: dividing the acquired infrared image, according to the sequence of lines, into a first part, a second part and a third part that do not overlap one another; performing space-domain coding on each line in the first part by adopting a first space-domain coding mode; performing space-domain coding on each line in the third part by adopting a third space-domain coding mode;
performing space-domain coding on each line in the second part by adopting the first space-domain coding mode and the third space-domain coding mode; according to the determined line mean information and a predetermined spatial coding sequence, respectively performing spatial coding on each line in the first part, wherein the spatial coding comprises the following steps: weighting the line mean value information of a total of lineparam/2 lines, which are adjacent to and after the corresponding line, in the acquired infrared image with each piece of space-domain coding information in the space-domain coding sequence respectively, to obtain a primary coding information sequence after space-domain coding corresponding to the corresponding line of the acquired infrared image; according to the determined line mean information and a predetermined spatial coding sequence, respectively performing spatial coding on each line in the third part, wherein the spatial coding comprises the following steps: weighting the line mean value information of a total of lineparam/2 lines, which are adjacent to and before the corresponding line, in the acquired infrared image with each piece of space-domain coding information in the space-domain coding sequence respectively, to obtain a primary coding information sequence after space-domain coding corresponding to the corresponding line of the acquired infrared image, wherein lineparam is a spatial coding parameter, the value of lineparam is not more than the total number of lines of the acquired infrared image, and the space-domain coding sequence comprises lineparam/2 pieces of space-domain coding information;
according to the determined primary coding information sequence after the space-domain coding, the determined primary coding information sequence after the time-sequence coding and a predetermined frequency domain coding coefficient set, performing space-time-frequency joint compact coding to determine a secondary coding information sequence after the space-time-frequency joint compact coding: determining a primary coding information sequence after space-time coding according to the determined primary coding information sequence after space-domain coding and the determined primary coding information sequence after time-sequence coding;
performing space-time-frequency joint compact coding according to the primary coding information sequence subjected to space-time coding and a predetermined frequency domain coding coefficient group to determine a secondary coding information sequence subjected to space-time-frequency joint compact coding;
the encoding fusion unit is used for encoding and fusing the acquired infrared image according to the secondary coding information sequence after the space-time-frequency joint compact coding and the determined line mean value information, so as to determine an infrared image after encoding fusion: determining a compensation value of each corresponding line according to the secondary coding information after the space-time-frequency joint compact coding and the determined line mean value information; and compensating the characteristic values of the pixel points in each corresponding line by using the compensation value of that line to determine an encoded and fused infrared image, wherein the encoded and fused infrared image is an infrared image with the horizontal stripes removed.
6. A mobile infrared device, comprising:
an infrared detector for movably acquiring an infrared image;
the infrared image horizontal stripe removal device according to claim 5, which is configured to remove the horizontal stripes of the infrared image acquired by the infrared detector.

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant