CN116385495A - Moving target closed-loop detection method of infrared video under dynamic background - Google Patents

Moving target closed-loop detection method of infrared video under dynamic background Download PDF

Info

Publication number
CN116385495A
Authority
CN
China
Prior art keywords
image
tracking
frame
detection
corner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310428567.8A
Other languages
Chinese (zh)
Inventor
王勇
霍礼乐
范云生
刘婷
王国峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202310428567.8A
Publication of CN116385495A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20164 Salient point detection; Corner detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a closed-loop method for detecting moving targets in infrared video under a dynamic background, comprising the following steps: analyzing the noise type of the infrared image and removing the noise with filtering algorithms; applying different interference-corner filtering strategies to different frames so as to perform coarse and fine corner elimination, and storing the finally detected corners; tracking the corners with a sparse optical-flow method, filtering the tracked points by bidirectional tracking, computing a homography matrix from the relation between the corner sets of the two adjacent frames, and using the homography transformation matrix to compensate the background of the current frame; differencing the previous infrared frame with the background-compensated current frame, binarizing the difference result with an adaptive gray threshold, and applying morphological operations to the binary image to obtain the final target position; and forming a mask from the target position detected in the previous frame and feeding it back to the corner detection of the next frame, thereby forming a complete closed detection loop.

Description

Moving target closed-loop detection method of infrared video under dynamic background
Technical Field
The invention relates to the field of infrared moving target detection, in particular to a moving target closed-loop detection method of infrared video under a dynamic background.
Background
Moving-target detection is one of the core research topics in computer vision; it underlies target tracking, target recognition and target behavior understanding, and has broad application prospects in military, security monitoring, industrial automation, intelligent transportation and other fields. Depending on whether the shooting platform or camera moves, moving-target detection can be divided into detection under a static background and detection under a dynamic background. In detection under a static background the camera is stationary during shooting, so the obtained video sequence contains only the motion of the target. In detection under a dynamic background the shooting platform or camera and the moving target change at the same time; the camera motion may include translation, rotation, scaling and so on, so the video sequence contains both the motion of the target and the motion of the background, which makes detection much harder than in the static case. The main approach to moving-target detection under a dynamic background is background compensation: a transformation matrix between the previous frame and the current frame is computed and used to compensate the background of the current frame, and the moving target is then detected with a frame-difference method. The accuracy of the background compensation directly affects the detection accuracy.
Compared with visible-light images, infrared images have lower contrast, poorer resolution and more noise, so during background compensation fewer corners or feature points can be extracted from an infrared image than from a visible-light image, and they tend to be more concentrated. This degrades the accuracy of the subsequent transformation-matrix calculation and therefore of the background compensation. Moreover, during background compensation the corners or feature points in the background contribute positively to the transformation-matrix calculation, whereas feature points inside the moving-target region hinder accurate registration of the background. Compared with visible light, the feature points extracted from an infrared image contain a larger proportion of points from the moving-target region, which increases the probability that the PROSAC algorithm selects points inside the moving-target region during feature-point screening; the resulting inaccurate registration affects the final detection result.
Disclosure of Invention
To address the problems in the prior art, the invention discloses a closed-loop method for detecting moving targets in infrared video under a dynamic background, which specifically comprises the following steps:
analyzing the noise type of the infrared image, and removing the noise with a filtering algorithm;
applying different interference-corner filtering strategies to different frames so as to perform coarse and fine corner elimination, and storing the finally detected corners;
tracking the corners with a sparse optical-flow method and filtering the tracked points by bidirectional tracking: the positions of the previous-frame corners in the current frame are determined, backward tracking is then performed from the current tracking points, the two groups of corners are screened, and corners for which backward tracking fails are eliminated;
computing a homography matrix from the relation between the corner sets of the two adjacent frames, and using the homography transformation matrix to compensate the background of the current frame;
differencing the previous infrared frame with the background-compensated current frame, binarizing the difference result with an adaptive gray threshold, and applying morphological operations to the binary image to obtain the final target position;
and forming a mask from the target position detected in the previous frame and feeding it back to the corner detection of the next frame, thereby forming a complete closed detection loop.
Further, a Gaussian filter is applied to the current infrared frame to remove Gaussian white noise, and a median filter is used to remove random point-like noise from the infrared image.
Further, it is judged whether the current frame is among the first five infrared frames; if so, coarse elimination is performed, otherwise fine elimination is performed;
the infrared image is divided into sub-blocks of equal size, numbered i_1 to i_25;
starting from i_1, corner detection is performed on each sub-block with the Shi-Tomasi algorithm, and after detection the sub-blocks are sorted by corner count from smallest to largest;
the 5 sub-blocks with the most corners and the 5 with the fewest are removed, and the final corners are stored;
if the frame is not among the first five, the detection result of the previous frame is fed back to the current frame: using the mask generated from the moving-target region of the previous frame, no corners are detected in the regions of the current frame where the mask is zero;
$$\mathrm{mask}(x,y)=\begin{cases}0, & (x,y)\in\text{previous-frame moving-target region}\\ 255, & \text{otherwise}\end{cases}$$
carrying out Shi-Tomasi corner detection on other areas to finish fine elimination, so that the whole detection method realizes closed-loop detection;
and storing the final corner detection result.
Further, each stored corner is tracked in the current frame with the LK pyramidal optical-flow algorithm, obtaining for each previous-frame corner P_i(x_0, y_0) its position P_i(x_1, y_1) in the current frame; this is repeated until all corners have been processed, and all tracking points are stored;
the LK pyramidal optical-flow algorithm is then applied again to the tracking-point set for backward tracking, obtaining for each current-frame tracking point P_i(x_1, y_1) its position P_i(x_2, y_2) in the previous frame; this is repeated until all tracking points have been processed, and all backward-tracked point pairs are stored;
forward tracking point pairs are removed according to the output state vector;
if the output state vector is 1, the previous-frame corner and the corresponding current-frame tracking point are kept; if it is 0, the corresponding point pair is removed;
the same removal strategy is applied to the backward-tracking set;
the forward and backward tracking point-pair sets are screened: for each point P_i(x_1, y_1), the corresponding points P_i(x_0, y_0) and P_i(x_2, y_2) are compared; if their x and y coordinates are the same, bidirectional tracking has succeeded, and P_i(x_0, y_0) and P_i(x_1, y_1) are added to the successfully tracked set;
this is repeated until all point pairs have been screened.
Further, for the two corresponding corner sets of the previous frame and the current frame, the optimal homography transformation matrix H of the two infrared frames is computed with the PROSAC algorithm; a previous-frame corner P_i(x_0, y_0) and its corresponding current-frame tracking point P_i(x_1, y_1) should satisfy the following relationship:
$$s\begin{bmatrix}x_1\\ y_1\\ 1\end{bmatrix}=H\begin{bmatrix}x_0\\ y_0\\ 1\end{bmatrix}$$
where s is a nonzero scale factor and H is the 3×3 homography matrix.
and performing background compensation on the current frame by using the optimal homography transformation matrix H, and respectively performing interpolation in the x direction and the y direction of the pixel point by adopting a bilinear interpolation mode so as to perform image correction.
Performing differential operation on the infrared image of the previous frame and the compensated infrared image, and performing Gaussian filtering on the differential image to remove noise;
threshold segmentation is carried out on the differential image by using an Otsu algorithm, and a binary image suspected to be a moving target is obtained;
performing an erosion operation on the binary image of the suspected moving target to remove discrete noise and linear noise interference, performing a dilation operation, labeling and filtering out small regions after dilation, and performing a dilation operation again to obtain a final moving-target binary image;
calculating the outline of the moving object according to the final moving object binary image, and storing the outline;
traversing each target contour, drawing a circumscribed (bounding) rectangle on the current infrared frame according to the contour, and storing the position of the rectangle and its length and width;
repeating the steps until all the outlines are traversed to obtain a final moving target detection result diagram.
Further, creating a single-channel mask image with the same size and type as the infrared image of the current frame and set to be white;
taking out an unprocessed circumscribed rectangular frame from the stored rectangle set, acquiring its position and size, expanding its length and width outward by m pixels, mapping the expanded rectangle into the mask image, setting the gray value of the pixels inside the rectangle in the mask image to 0, adding the processed rectangle to the processed set, and continuing with the next rectangle; repeating the steps until all rectangles have been processed;
and obtaining a mask image of the moving object, and feeding back the mask image as initial information to the next frame detection to realize closed loop detection.
With the above technical scheme, the closed-loop moving-target detection method for infrared video under a dynamic background provided by the invention applies corner homogenization in the initial detection stage and performs optical-flow tracking on the homogenized corners, which improves the accuracy of the transformation-matrix calculation. In the optical-flow tracking stage a bidirectional tracking algorithm removes the influence of erroneous tracking points on background compensation, improving the accuracy of moving-target detection. The mask of the previous frame's moving-target region eliminates the influence of corners inside the current frame's moving-target region, so the optimal homography transformation matrix of the two infrared frames is computed iteratively from background points only; this improves the accuracy of background compensation, the computational efficiency of the algorithm, and its real-time performance.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of the method disclosed in the present invention;
FIG. 2 is a schematic diagram of an infrared image sequence in accordance with the present invention;
FIG. 3 is a schematic diagram of a corner homogenizing image block in the present invention;
FIG. 4 is a schematic diagram of sparse optical flow bidirectional tracking in the present invention;
FIG. 5 is a schematic diagram of the differential result without background compensation in the present invention;
FIG. 6 is a schematic diagram of the differential result without bidirectional tracking and "fine elimination" in the present invention;
FIG. 7 is a schematic diagram of the differential result with bidirectional tracking and "fine elimination" in the present invention;
FIG. 8 is a schematic diagram of threshold segmentation of the differential result without bidirectional tracking and "fine elimination" in the present invention;
FIG. 9 is a schematic diagram of threshold segmentation of the differential result with bidirectional tracking and "fine elimination" in the present invention;
FIG. 10 is a schematic diagram of the final moving-region detection result;
FIG. 11 is a schematic diagram of the mask of the moving-target region in the present invention.
Detailed Description
In order to make the technical solution and advantages of the present invention clearer, the technical solution in the embodiments of the present invention is described clearly and completely below with reference to the accompanying drawings:
In the implementation, the infrared video images collected by the equipment are as shown in fig. 2. Corner homogenization with coarse elimination or fine elimination is then selected according to the frame number; bidirectional optical-flow tracking removes erroneous point pairs; background compensation and image differencing are then performed; finally a mask is generated from the moving-target region and fed back to the detection of the next frame, forming closed-loop detection of the infrared moving target. The method disclosed by the invention comprises the following specific steps:
S1: Preprocess the current infrared frame: analyze the noise type of the infrared image and process the image with a filtering algorithm. The following steps are specifically adopted:
S11: The current infrared frame is filtered with a 7×7 Gaussian filter to remove Gaussian white noise.
S12: A 3×3 median filter is then used to remove random point-like noise. The noise of the processed infrared image is clearly reduced while detail information is well preserved, which improves image quality to a certain extent and provides a good basis for the subsequent processing.
S2: and different interference corner filtering strategies are adopted for different frames, so that the rough elimination or the fine elimination of the corner is realized, and the final detection corner is stored and recorded. The method specifically adopts the following steps:
S21: It is judged whether the current frame is among the first five frames; if so, "coarse elimination" is performed, otherwise "fine elimination" is performed.
S22: If S21 determines that the frame is among the first five, the infrared image is divided into 5×5 sub-blocks of equal size, numbered i_1 to i_25; the tiling of the infrared image is shown in fig. 3.
S23: Starting from i_1, corner detection is performed on each sub-block with the Shi-Tomasi algorithm; after detection, the sub-blocks are sorted by corner count from smallest to largest.
S24: The 5 sub-blocks with the most corners and the 5 with the fewest are removed and the final corners are stored, so that the detected corners are distributed more uniformly; this completes "coarse elimination".
S25: If S21 determines that the frame is not among the first five, the detection result of the previous frame is fed back to the current frame: using the mask generated from the previous frame's moving-target region (the black region), no corner detection is performed where the mask is zero on the current frame;
$$\mathrm{mask}(x,y)=\begin{cases}0, & (x,y)\in\text{previous-frame moving-target region}\\ 255, & \text{otherwise}\end{cases}$$
S26: and (3) carrying out Shi-Tomasi corner detection on other areas to finish 'fine elimination', so that the whole detection method realizes closed loop detection.
S27: and storing the final corner detection result.
S3: and tracking the calculated corner points of the previous frame by adopting a sparse optical flow method, filtering the tracking points by using bidirectional tracking, determining the corresponding position relation of the corner points of the previous frame in the current frame, performing backward tracking by using the current tracking points, screening the two groups of corner points, and eliminating the corner points with backward tracking failure. The method specifically adopts the following steps:
S31: Each corner obtained in S27 is tracked in the current frame with the LK pyramidal optical-flow algorithm, obtaining for each previous-frame corner P_i(x_0, y_0) its position P_i(x_1, y_1) in the current frame. Because infrared video varies continuously in time, it is reasonable to assume that most points of the previous frame can be found again in the next frame.
S32: S31 is repeated until all corners have been processed, and all tracking points are stored.
S33: The LK pyramidal optical-flow algorithm is applied again to the tracking-point set of S31 for backward tracking, obtaining for each current-frame tracking point P_i(x_1, y_1) its position P_i(x_2, y_2) in the previous frame.
S34: S33 is repeated until all tracking points have been processed, and all backward-tracked point pairs are stored.
S35: Forward tracking point pairs are removed according to the output state vector.
S36: If the output state vector is 1, the previous-frame corner and the corresponding current-frame tracking point are kept; if it is 0, the corresponding point pair is removed.
S37: The same removal strategy is applied to the backward-tracking set.
S38: The forward and backward tracking point-pair sets are screened: for each point P_i(x_1, y_1), the corresponding points P_i(x_0, y_0) and P_i(x_2, y_2) are compared; if their x and y coordinates are the same, bidirectional tracking has succeeded (the bidirectional-tracking scheme is shown in fig. 4), and P_i(x_0, y_0) and P_i(x_1, y_1) are added to the successfully tracked set.
S39: The above operation is repeated until all point pairs have been screened.
S4: and carrying out homography matrix calculation according to the relation between the angular point sets of the front frame and the rear frame, and carrying out background compensation on the current frame image by using the homography transformation matrix. The method specifically adopts the following steps:
S41: For the two corresponding corner sets of the previous frame and the current frame obtained in S36, the optimal homography transformation matrix H of the two infrared frames is computed with the PROSAC algorithm; a previous-frame corner P_i(x_0, y_0) and its corresponding current-frame tracking point P_i(x_1, y_1) should satisfy the following relationship:
$$s\begin{bmatrix}x_1\\ y_1\\ 1\end{bmatrix}=H\begin{bmatrix}x_0\\ y_0\\ 1\end{bmatrix}$$
where s is a nonzero scale factor and H is the 3×3 homography matrix.
S42: The differential image obtained without background compensation of the current frame is shown in fig. 5. The optimal homography transformation matrix H calculated in S41 is used to compensate the background of the current frame; to eliminate, as far as possible, errors caused by pixel offsets during compensation, bilinear interpolation is applied separately in the x and y directions of each pixel for image correction.
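In OpenCV terms, S41-S42 could look roughly like the sketch below; cv2.RHO is OpenCV's PROSAC-based robust estimator and stands in for the PROSAC step (cv2.RANSAC would also work), and the 3-pixel reprojection threshold is an illustrative value. prev_pts and curr_pts are the matched pairs kept by the bidirectional tracking step:

```python
import cv2
import numpy as np

def compensate_background(prev_pts, curr_pts, curr_gray):
    """Estimate the frame-to-frame homography and warp the current frame so its
    background registers onto the previous frame."""
    # H maps previous-frame coordinates to current-frame coordinates,
    # matching the point relation given above.
    H, inlier_mask = cv2.findHomography(prev_pts, curr_pts, cv2.RHO, 3.0)
    h, w = curr_gray.shape
    # Warping the current frame through H^-1 (WARP_INVERSE_MAP) aligns it with the
    # previous frame's background; INTER_LINEAR = bilinear interpolation.
    compensated = cv2.warpPerspective(curr_gray, H, (w, h),
                                      flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    return compensated, H, inlier_mask
```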
S5: and differentiating the infrared image of the previous frame with the current frame image after background compensation, performing self-adaptive gray threshold binarization on the differential result, and performing morphological operation on the binary image to obtain a final target position. The method specifically adopts the following steps:
S51: A difference operation is performed between the previous infrared frame and the compensated infrared frame, and the difference image is Gaussian-filtered to remove noise. The background-compensated difference image without bidirectional tracking and fine elimination is shown in fig. 6, and the one with bidirectional tracking and fine elimination is shown in fig. 7.
S52: The difference image is threshold-segmented with the Otsu algorithm to obtain a binary image of suspected moving targets. The threshold-segmented difference image without bidirectional tracking and fine elimination is shown in fig. 8, and the one with bidirectional tracking and fine elimination is shown in fig. 9.
S53: The binary image of suspected moving targets is eroded to remove discrete and linear noise, then dilated; after dilation, small regions are labeled and filtered out, and a further dilation yields the final moving-target binary image. The final moving-region detection result is shown in fig. 10.
S54: and calculating the outline of the moving object according to the final moving object binary image, and storing the outline.
S55: traversing each target contour, drawing an external rectangle on the infrared image of the current frame according to the contour, and storing the position of the rectangle and the length and width of the rectangle.
S56: repeating the steps until all the outlines are traversed, and obtaining a final moving target detection result diagram.
S6: and forming a mask according to the detection target position of the previous frame, and feeding back to the corner detection of the next frame to form a complete closed loop detection system. The method specifically adopts the following steps:
S61: A single-channel mask image of the same size and type as the current infrared frame is created and set to white.
S62: and (3) taking out an unprocessed circumscribed rectangular frame from the rectangular frame set stored in the S55, acquiring the position and the size of the rectangular frame, expanding the length and the width of the rectangular frame outwards by m pixel points, wherein m can be adjusted according to actual conditions, mapping the expanded rectangular frame into a mask image, setting the gray value of the pixel point inside the mask image rectangular frame to be 0, adding the processed rectangular frame into the processed set, and continuing to process the next rectangular frame.
S63: repeating the steps until all the rectangular frames are processed.
S64: and (3) finishing the steps to obtain a mask image of the moving object, and feeding back the mask image as initial information to the next frame detection to realize closed loop detection, wherein the mask image is shown in fig. 11.
The foregoing is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or modification that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention, based on the technical solution of the present invention and its inventive concept, shall fall within the scope of protection of the present invention.

Claims (7)

1. A method for detecting a moving target in a closed loop of an infrared video under a dynamic background is characterized by comprising the following steps:
analyzing the noise type of the infrared image, and removing the noise with a filtering algorithm;
applying different interference-corner filtering strategies to different frames so as to perform coarse and fine corner elimination, and storing the finally detected corners;
tracking the corners with a sparse optical-flow method and filtering the tracked points by bidirectional tracking: the positions of the previous-frame corners in the current frame are determined, backward tracking is then performed from the current tracking points, the two groups of corners are screened, and corners for which backward tracking fails are eliminated;
computing a homography matrix from the relation between the corner sets of the two adjacent frames, and using the homography transformation matrix to compensate the background of the current frame;
differencing the previous infrared frame with the background-compensated current frame, binarizing the difference result with an adaptive gray threshold, and applying morphological operations to the binary image to obtain the final target position;
and forming a mask from the target position detected in the previous frame and feeding it back to the corner detection of the next frame, thereby forming a complete closed detection loop.
2. The method for detecting the moving object in the closed loop of the infrared video under the dynamic background according to claim 1, wherein the method comprises the following steps: filtering the noise of the current infrared frame with a Gaussian filter to remove Gaussian white noise, and using a median filter to remove random point-like noise from the infrared image.
3. The method for detecting the moving object in the closed loop of the infrared video under the dynamic background according to claim 1, wherein the method comprises the following steps: judging whether the current frame is among the first five infrared frames; if so, performing coarse elimination, otherwise performing fine elimination;
dividing the infrared image into sub-blocks of equal size, numbered i_1 to i_25;
starting from i_1, performing corner detection on each sub-block with the Shi-Tomasi algorithm, and after detection sorting the sub-blocks by corner count from smallest to largest;
removing the 5 sub-blocks with the most corners and the 5 with the fewest, and storing the final corners;
if the frame is not among the first five, feeding the detection result of the previous frame back to the current frame: using the mask generated from the moving-target region of the previous frame, detecting no corners in the regions of the current frame where the mask is zero;
$$\mathrm{mask}(x,y)=\begin{cases}0, & (x,y)\in\text{previous-frame moving-target region}\\ 255, & \text{otherwise}\end{cases}$$
carrying out Shi-Tomasi corner detection on other areas to finish fine elimination, so that the whole detection method realizes closed-loop detection;
and storing the final corner detection result.
4. The method for detecting the moving object in the closed loop of the infrared video under the dynamic background according to claim 2, wherein the method comprises the following steps:
tracking each stored corner in the current frame with the LK pyramidal optical-flow algorithm, obtaining for each previous-frame corner P_i(x_0, y_0) its position P_i(x_1, y_1) in the current frame; repeating until all corners have been processed, and storing all tracking points;
applying the LK pyramidal optical-flow algorithm again to the tracking-point set for backward tracking, obtaining for each current-frame tracking point P_i(x_1, y_1) its position P_i(x_2, y_2) in the previous frame; repeating until all tracking points have been processed, and storing all backward-tracked point pairs;
removing forward tracking point pairs according to the output state vector;
if the output state vector is 1, keeping the previous-frame corner and the corresponding current-frame tracking point; if it is 0, removing the corresponding point pair;
applying the same removal strategy to the backward-tracking set;
screening the forward and backward tracking point-pair sets: for each point P_i(x_1, y_1), comparing the corresponding points P_i(x_0, y_0) and P_i(x_2, y_2); if their x and y coordinates are the same, bidirectional tracking has succeeded, and P_i(x_0, y_0) and P_i(x_1, y_1) are added to the successfully tracked set;
repeating the above operation until the screening of all the point pairs is completed.
5. The method for detecting the moving object in the closed loop of the infrared video under the dynamic background according to claim 3, wherein the method comprises the following steps: for the two corresponding corner sets of the previous frame and the current frame, computing the optimal homography transformation matrix H of the two infrared frames with the PROSAC algorithm; a previous-frame corner P_i(x_0, y_0) and its corresponding current-frame tracking point P_i(x_1, y_1) should satisfy the following relationship:
$$s\begin{bmatrix}x_1\\ y_1\\ 1\end{bmatrix}=H\begin{bmatrix}x_0\\ y_0\\ 1\end{bmatrix}$$
where s is a nonzero scale factor and H is the 3×3 homography matrix.
and performing background compensation on the current frame by using the optimal homography transformation matrix H, and respectively performing interpolation in the x direction and the y direction of the pixel point by adopting a bilinear interpolation mode so as to perform image correction.
6. The method for closed loop detection of a moving object in an infrared video under a dynamic background according to claim 4, wherein the method comprises the following steps:
performing differential operation on the infrared image of the previous frame and the compensated infrared image, and performing Gaussian filtering on the differential image to remove noise;
threshold segmentation is carried out on the differential image by using an Otsu algorithm, and a binary image suspected to be a moving target is obtained;
performing an erosion operation on the binary image of the suspected moving target to remove discrete noise and linear noise interference, performing a dilation operation, labeling and filtering out small regions after dilation, and performing a dilation operation again to obtain a final moving-target binary image;
calculating the outline of the moving object according to the final moving object binary image, and storing the outline;
traversing each target contour, drawing a circumscribed (bounding) rectangle on the current infrared frame according to the contour, and storing the position of the rectangle and its length and width;
repeating the steps until all the outlines are traversed to obtain a final moving target detection result diagram.
7. The method for detecting the moving object in the closed loop of the infrared video under the dynamic background according to claim 1, wherein the method comprises the following steps: creating a single-channel mask image with the same size and type as the infrared image of the current frame and set to be white;
taking out an unprocessed circumscribed rectangular frame from the stored rectangle set, acquiring its position and size, expanding its length and width outward by m pixels, mapping the expanded rectangle into the mask image, setting the gray value of the pixels inside the rectangle in the mask image to 0, adding the processed rectangle to the processed set, and continuing with the next rectangle; repeating the steps until all rectangles have been processed;
and obtaining a mask image of the moving object, and feeding back the mask image as initial information to the next frame detection to realize closed loop detection.
CN202310428567.8A 2023-04-20 2023-04-20 Moving target closed-loop detection method of infrared video under dynamic background Pending CN116385495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310428567.8A CN116385495A (en) 2023-04-20 2023-04-20 Moving target closed-loop detection method of infrared video under dynamic background

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310428567.8A CN116385495A (en) 2023-04-20 2023-04-20 Moving target closed-loop detection method of infrared video under dynamic background

Publications (1)

Publication Number Publication Date
CN116385495A true CN116385495A (en) 2023-07-04

Family

ID=86970988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310428567.8A Pending CN116385495A (en) 2023-04-20 2023-04-20 Moving target closed-loop detection method of infrared video under dynamic background

Country Status (1)

Country Link
CN (1) CN116385495A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116999044A (en) * 2023-09-07 2023-11-07 南京云思创智信息科技有限公司 Real-time motion full-connection bidirectional consistent optical flow field heart rate signal extraction method
CN116999044B (en) * 2023-09-07 2024-04-16 南京云思创智信息科技有限公司 Real-time motion full-connection bidirectional consistent optical flow field heart rate signal extraction method
CN117671801A (en) * 2024-02-02 2024-03-08 中科方寸知微(南京)科技有限公司 Real-time target detection method and system based on binary reduction
CN117671801B (en) * 2024-02-02 2024-04-23 中科方寸知微(南京)科技有限公司 Real-time target detection method and system based on binary reduction


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination