CN111597917B - Target detection method based on frame difference method - Google Patents

Target detection method based on frame difference method

Info

Publication number
CN111597917B
CN111597917B (application CN202010337582.8A)
Authority
CN
China
Prior art keywords
frame
image
gray
difference
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010337582.8A
Other languages
Chinese (zh)
Other versions
CN111597917A (en)
Inventor
李昌利
何德明
汤世强
张怡彤
敖宇
吴海宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU
Priority to CN202010337582.8A
Publication of CN111597917A
Application granted
Publication of CN111597917B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/49: Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a target detection method based on a frame difference method. It applies an improved frame-difference design: the video stream is first divided into groups, each original image frame in every group is then processed, and the differences between preceding and succeeding image frames are used to detect and identify a moving target efficiently.

Description

Target detection method based on frame difference method
Technical Field
The invention relates to a target detection method based on a frame difference method, and belongs to the technical field of target identification and tracking.
Background
Moving object detection is the process of detecting changed regions in an image sequence and extracting the moving object from the background. It is commonly implemented by comparing corresponding pixels of the current image and a background image: if the difference exceeds a specific value (a threshold), the pixel is classified as part of the foreground moving object. However, because of noise, some background pixels can be mistakenly detected as motion regions, which interferes with the detection of the moving object. The noise mainly stems from environmental influences such as weather and illumination, and these environmental factors make detection and segmentation of the moving object considerably more difficult. Depending on whether the camera is stationary, motion detection is divided into static-background and moving-background cases. Since the cameras of most video surveillance systems are fixed, moving object detection against a static background has received wide attention, and one of the common methods is the inter-frame difference method.
The inter-frame difference method subtracts the pixel values of two adjacent frames, or of two frames separated by several frames, in a video stream, and thresholds the resulting image to extract the motion region. Taking two frames as an example, the corresponding pixel values of two adjacent frames are subtracted to obtain a difference image, which is then binarized: assuming the ambient brightness does not change much, if the change of a pixel value is below a predetermined threshold, the pixel is considered a background pixel; if the pixel values of an image area change greatly, the change is attributed to a moving object.
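By way of illustration, the following is a minimal sketch of the conventional two-frame differencing described above, written in Python with OpenCV; the function name two_frame_difference and the default threshold value are assumptions introduced for this example and are not part of the original disclosure.

```python
import cv2

def two_frame_difference(prev_frame, curr_frame, threshold=25):
    """Return a binary foreground mask from two consecutive BGR frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)           # per-pixel absolute difference
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask                                        # 255 marks candidate moving pixels
```

With a fixed threshold such as 25, the sensitivity to noise and the ghosting problem described in the following paragraph are easy to reproduce, which motivates the improved scheme of the invention.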
The traditional inter-frame difference method is very sensitive to environmental noise, so if the threshold is chosen poorly, noise cannot be separated effectively from the valid information in the image. In addition, for a large, uniformly colored target the method may fail to extract the complete moving object. The inter-frame difference can also produce a ghosting phenomenon, in which a single moving object is detected as two objects.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a target detection method based on a frame difference method which, by applying an improved frame-difference design, achieves efficient detection and identification of moving targets in a video stream, improves the detection accuracy for moving targets, and provides technical support for moving target detection in various scenes.
The invention adopts the following technical scheme to solve the technical problem. The invention designs a target detection method based on a frame difference method, which is used to detect a moving target in a video stream, containing the moving target, obtained by a fixed-angle image capturing device, and which comprises the following steps:
a, grouping the original image frames in the video stream in sequence, according to the preset number of image frames contained in a single group and the preset number of overlapping frames between consecutive groups, to obtain the groups corresponding to the video stream, and entering step B;
b, respectively aiming at each group, executing the following steps B1-B4 to obtain the moving target area image corresponding to each group and the position of the moving target area in the corresponding original image frame, and realizing the identification of the moving target corresponding to the group;
b1, respectively performing denoising processing and graying processing on each original image frame in the group to obtain each gray image frame, and then entering the step B2;
b2, carrying out pairwise difference processing on each gray image frame in the group to obtain each difference image, and then entering the step B3;
b3, respectively aiming at each differential image, carrying out binarization processing on the differential image based on a gray threshold T between a foreground region and a background region in the differential image, and obtaining a differential binary image corresponding to the differential image; obtaining difference binary images corresponding to the difference images respectively, and then entering step B4;
step B4, performing per-pixel logical AND processing on the difference binary images; the foreground of the resulting image is the moving object, from which the moving object area image and the position of the moving object area in the corresponding original image frame are obtained.
As a preferred technical scheme of the invention: the method also comprises the following steps C to D, and after the step B is executed, the step C is executed;
c, respectively aiming at each group, inserting the moving target area image into the original image frame positioned in the middle of the group according to the position of the moving target area corresponding to the group in the corresponding original image frame, and updating the corresponding original image frame; after the operation of the step for each group is finished, entering the step D;
and D, based on the original image frames in the groups, sequentially playing the groups to realize the tracking of the moving target.
As a preferred technical scheme of the invention: step BC is also included, after step B is executed, the step BC is entered, and after step BC is executed, the step C is entered;
and step BC, aiming at the target area image corresponding to each group, respectively applying mathematical morphology to fill the hole, updating each target area image, and then entering the step C.
As a preferred embodiment of the present invention, in the step B3, the following steps B3-1 to B3-2 are executed for each difference image, so as to obtain a difference binary image corresponding to the difference image; obtaining difference binary images corresponding to the difference images respectively, and then entering step B4;
b3-1, defining a gray value t. Let ω₀ be the ratio of the number of pixels with gray value less than t in the difference image to the number of all pixels in the difference image, and ω₁ the ratio of the number of pixels with gray value greater than t in the difference image to the number of all pixels in the difference image. The following objective function is applied:
G = ω₀*ω₁*(μ₀ - μ₁)²
the gray value t corresponding to the maximum value of G is taken as the gray threshold T between the foreground area and the background area in the difference image, and the process then enters step B3-2; wherein μ₀ represents the average gray value of the pixels with gray value less than t in the difference image, and μ₁ represents the average gray value of the pixels with gray value greater than t in the difference image;
and B3-2, carrying out binarization processing on the differential image based on the gray threshold T between the foreground region and the background region in the differential image, and obtaining a differential binary image corresponding to the differential image.
As a preferred technical scheme of the invention: in the step B3-2, based on the gray threshold T between the foreground region and the background region in the difference image, if the gray value of the pixel is greater than or equal to T, the gray value of the pixel is defined as 255; and if the pixel gray value is less than T, defining the pixel gray value as 0, thereby carrying out binarization processing on the difference image and obtaining a difference binary image corresponding to the difference image.
As a preferred technical scheme of the invention: the number of image frames contained in the preset single group is 5; the step B also comprises the following steps I to II, and after the step B3 is executed, the steps I to II are executed in sequence;
dividing each differential binary image into a 1-frame interval set, a 2-frame interval set, a 3-frame interval set and a 4-frame interval set according to a frame interval between two gray-scale image frames corresponding to each differential binary image;
step II, for the 1-frame interval set: first, for the difference binary image corresponding to the first and second gray image frames and the difference binary image corresponding to the second and third gray image frames, executing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and then carrying out a logical OR operation on the two operation results to obtain a first initial result; then, for the difference binary image corresponding to the third and fourth gray image frames and the difference binary image corresponding to the fourth and fifth gray image frames, performing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and performing a logical OR operation on the two operation results to obtain a second initial result; finally, carrying out a logical AND operation on the first initial result and the second initial result, the foreground of the resulting image being the moving target corresponding to the 1-frame interval set;
for the 2-frame interval set: first, for the difference binary image corresponding to the first and third gray image frames and the difference binary image corresponding to the second and fourth gray image frames, executing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and then carrying out a logical OR operation on the two operation results to obtain a third initial result; then, for the difference binary image corresponding to the third and fifth gray image frames and the difference binary image corresponding to the second and fourth gray image frames, performing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and performing a logical OR operation on the two operation results to obtain a fourth initial result; finally, carrying out a logical AND operation on the third initial result and the fourth initial result, the foreground of the resulting image being the moving target corresponding to the 2-frame interval set;
aiming at the 3-frame interval set, aiming at two difference binary images, performing logic AND operation of pixels between the two difference binary images, and obtaining a foreground in a result image, namely a moving target corresponding to the 3-frame interval set;
aiming at the 4-frame interval set, taking the foreground of the single differential binary image as a moving target corresponding to the 4-frame interval set;
in step B4, the logic and operation processing of pixels is performed on the moving object corresponding to the 1-frame interval set, the moving object corresponding to the 2-frame interval set, the moving object corresponding to the 3-frame interval set, and the moving object corresponding to the 4-frame interval set, and the obtained result image is the moving object, so as to obtain the moving object region image and the position of the moving object region in the corresponding original image frame.
As a preferred technical scheme of the invention: in step B1, for each original image frame in the group, a median filtering method is respectively used to perform the drying process.
Compared with the prior art, the target detection method based on the frame difference method has the following technical effects by adopting the technical scheme:
the invention designs a target detection method based on a frame difference method, applies an improved design aiming at the frame difference method, firstly groups video streams, then respectively processes each original image frame in each group, and utilizes the difference between the previous image frame and the next image frame to realize high-efficiency detection and identification of a moving target.
Drawings
Fig. 1 is a schematic flow chart of a target detection method based on a frame difference method according to the present invention.
Detailed Description
The following description will explain embodiments of the present invention in further detail with reference to the accompanying drawings.
The invention relates to a target detection method based on a frame difference method, which is used for realizing the detection of a moving target in a video stream containing the moving target obtained by a fixed-angle image capturing device, and in practical application, as shown in fig. 1, the following steps A to D are specifically executed.
And step A, grouping the original image frames in the video stream in sequence, according to the preset number of image frames contained in a single group and the preset number of overlapping frames between consecutive groups, to obtain the groups corresponding to the video stream, and entering step B.
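A minimal sketch of this grouping, assuming the frames are held in a Python list; the function name group_frames and the default values for the group size and overlap are illustrative assumptions (the patent later fixes the group size at 5 but leaves the overlap as a preset parameter).

```python
def group_frames(frames, group_size=5, overlap=2):
    """Step A sketch: split a frame list into overlapping groups.
    Trailing frames that do not fill a complete group are ignored here."""
    step = group_size - overlap
    return [frames[i:i + group_size]
            for i in range(0, len(frames) - group_size + 1, step)]
```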
And step B, executing the following steps B1 to B4 respectively aiming at each group, obtaining the moving target area image corresponding to each group and the position of the moving target area in the corresponding original image frame, realizing the identification of the moving target corresponding to the group, and then entering the step BC.
And B1, performing denoising processing and graying processing on each original image frame in the group to obtain each gray image frame, and then entering the step B2. In practical applications, the original image frame is denoised using, for example, median filtering.
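A possible sketch of step B1 in Python/OpenCV; the kernel size of the median filter is an assumption for the example.

```python
import cv2

def preprocess(frame, ksize=3):
    """Step B1 sketch: median filtering (denoising) followed by graying."""
    denoised = cv2.medianBlur(frame, ksize)            # suppress salt-and-pepper noise
    return cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)  # grayscale conversion
```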
And B2, carrying out pairwise difference processing on each gray image frame in the group to obtain each difference image, and then entering the step B3.
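A sketch of the pairwise differencing of step B2, assuming the gray image frames of one group are held in a Python list; keying each difference image by the pair of 1-based frame numbers (matching the "first frame", "second frame" wording used later) is an assumption made for later lookup.

```python
from itertools import combinations
import cv2

def pairwise_differences(gray_frames):
    """Step B2 sketch: absolute difference for every pair of gray frames in a group."""
    diffs = {}
    for (i, a), (j, b) in combinations(enumerate(gray_frames, start=1), 2):
        diffs[(i, j)] = cv2.absdiff(a, b)   # keyed by (earlier frame number, later frame number)
    return diffs
```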
B3, respectively aiming at each differential image, carrying out binarization processing on the differential image based on a gray threshold T between a foreground region and a background region in the differential image, and obtaining a differential binary image corresponding to the differential image; and then obtaining difference binary images corresponding to the difference images respectively, and then entering step B4.
In practical application, the step B3 is specifically designed to perform the following steps B3-1 to B3-2 for each difference image, respectively, to obtain a difference binary image corresponding to the difference image; then, difference binary images corresponding to the difference images are obtained, and the process then proceeds to step B4.
B3-1, defining a gray value t. Let ω₀ be the ratio of the number of pixels with gray value less than t in the difference image to the number of all pixels in the difference image, and ω₁ the ratio of the number of pixels with gray value greater than t in the difference image to the number of all pixels in the difference image. The following objective function is applied:
G = ω₀*ω₁*(μ₀ - μ₁)²
The gray value t corresponding to the maximum value of G is taken as the gray threshold T between the foreground area and the background area in the difference image, and the process then enters step B3-2; here μ₀ represents the average gray value of the pixels with gray value less than t in the difference image, and μ₁ represents the average gray value of the pixels with gray value greater than t in the difference image.
In practical application, for example, let sum be the total number of pixels in the difference image, N₀ the number of pixels whose gray value is less than t, and N₁ the number of pixels whose gray value is greater than t. Then ω₀ = N₀/sum and ω₁ = N₁/sum; since sum = N₀ + N₁, we have ω₀ + ω₁ = 1. Further, the overall average gray value of the difference image is μ = ω₀*μ₀ + ω₁*μ₁, and the between-class variance is G = ω₀*(μ₀ - μ)² + ω₁*(μ₁ - μ)². Rearranging this formula yields the objective function G = ω₀*ω₁*(μ₀ - μ₁)², which completes the actual execution of step B3-1 above.
And B3-2, carrying out binarization processing on the differential image based on the gray threshold T between the foreground region and the background region in the differential image, and obtaining a differential binary image corresponding to the differential image. In practical application, based on a gray threshold T between a foreground region and a background region in the differential image, if a pixel gray value is greater than or equal to T, defining the pixel gray value to be 255; and if the pixel gray value is less than T, defining the pixel gray value as 0, thereby carrying out binarization processing on the difference image and obtaining a difference binary image corresponding to the difference image.
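The threshold search of step B3-1 and the binarization of step B3-2 can be sketched as follows (an exhaustive search over candidate thresholds using the criterion given above); the function name otsu_binarize is an assumption, and pixels exactly equal to t are counted in the upper class here, a detail the text leaves open.

```python
import numpy as np

def otsu_binarize(diff_image):
    """Steps B3-1/B3-2 sketch: find t maximizing G = w0*w1*(mu0 - mu1)^2, then binarize."""
    pixels = diff_image.ravel().astype(np.float64)
    best_t, best_g = 0, -1.0
    for t in range(1, 255):
        below, above = pixels[pixels < t], pixels[pixels >= t]
        if below.size == 0 or above.size == 0:
            continue
        w0, w1 = below.size / pixels.size, above.size / pixels.size
        g = w0 * w1 * (below.mean() - above.mean()) ** 2
        if g > best_g:
            best_g, best_t = g, t
    binary = np.where(diff_image >= best_t, 255, 0).astype(np.uint8)
    return binary, best_t
```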
Step B4, performing per-pixel logical AND processing on the difference binary images; the foreground of the resulting image is the moving object, from which the moving object area image and the position of the moving object area in the corresponding original image frame are obtained.
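A sketch of the per-pixel AND combination and of recovering the target position; combine_masks_and and target_bounding_box are assumed helper names, and a bounding box is only one possible way to report the position of the moving object area.

```python
from functools import reduce
import cv2
import numpy as np

def combine_masks_and(binary_masks):
    """Step B4 sketch: a pixel survives only if it is foreground in every mask."""
    return reduce(cv2.bitwise_and, binary_masks)

def target_bounding_box(mask):
    """Return (x_min, y_min, x_max, y_max) of the foreground, or None if empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```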
And step BC, for the target area images corresponding to each group, applying mathematical morphology to fill the holes and updating each target area image, thereby remedying the holes left in the target area images by the inter-frame difference method and improving the definition of the target area images, and then entering step C.
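A sketch of the morphological hole filling of step BC; a closing operation is used here as one common way to fill small holes, and the rectangular kernel size is an assumption.

```python
import cv2

def fill_holes(mask, ksize=5):
    """Step BC sketch: morphological closing to fill small holes in the target mask."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```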
C, respectively aiming at each group, inserting the moving target area image into the original image frame positioned in the middle of the group according to the position of the moving target area corresponding to the group in the corresponding original image frame, and updating the corresponding original image frame; after the operation of this step for each packet is completed, step D is then entered.
And D, based on the original image frames in the groups, sequentially playing the groups to realize the tracking of the moving target.
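Steps C and D can be sketched as writing the detected target region back into the middle original frame of each group before playback; the bounding-box argument and the function name paste_target are assumptions, and the target region is assumed to have the same size as the box it is pasted into.

```python
def paste_target(group_frames, target_region, box):
    """Step C sketch: overwrite the target area of the group's middle frame."""
    x0, y0, x1, y1 = box
    mid = len(group_frames) // 2
    group_frames[mid][y0:y1 + 1, x0:x1 + 1] = target_region
    return group_frames[mid]
```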
When the target detection method based on the frame difference method is applied in practice, as shown in fig. 1, a 5-frame difference method is executed, and the following process is carried out.
And step A, grouping the original image frames in the video stream in sequence, with each preset group containing 5 image frames and a preset number of overlapping frames between consecutive groups, to obtain the groups corresponding to the video stream, and then entering step B.
And step B, executing the following steps B1 to B4 respectively aiming at each group, obtaining the moving target area image corresponding to each group and the position of the moving target area in the corresponding original image frame, realizing the identification of the moving target corresponding to the group, and then entering the step BC.
For each group, steps B1 to B3 are performed in turn to obtain the difference binary images corresponding to the group, and then the following steps I to II are performed in sequence.
And I, dividing each differential binary image into a 1-frame interval set, a 2-frame interval set, a 3-frame interval set and a 4-frame interval set according to the frame interval between two gray scale image frames corresponding to each differential binary image.
Step II, for the 1-frame interval set: first, for the difference binary image corresponding to the first and second gray image frames and the difference binary image corresponding to the second and third gray image frames, executing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and then carrying out a logical OR operation on the two operation results to obtain a first initial result; then, for the difference binary image corresponding to the third and fourth gray image frames and the difference binary image corresponding to the fourth and fifth gray image frames, performing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and performing a logical OR operation on the two operation results to obtain a second initial result; finally, carrying out a logical AND operation on the first initial result and the second initial result, the foreground of the resulting image being the moving target corresponding to the 1-frame interval set;
for the 2-frame interval set: first, for the difference binary image corresponding to the first and third gray image frames and the difference binary image corresponding to the second and fourth gray image frames, executing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and then carrying out a logical OR operation on the two operation results to obtain a third initial result; then, for the difference binary image corresponding to the third and fifth gray image frames and the difference binary image corresponding to the second and fourth gray image frames, performing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and performing a logical OR operation on the two operation results to obtain a fourth initial result; finally, carrying out a logical AND operation on the third initial result and the fourth initial result, the foreground of the resulting image being the moving target corresponding to the 2-frame interval set;
aiming at the 3-frame interval set, aiming at two difference binary images, performing logic AND operation of pixels between the two difference binary images, and obtaining a foreground in a result image, namely a moving target corresponding to the 3-frame interval set;
and regarding the 4-frame interval set, taking the foreground of the single differential binary image as a moving object corresponding to the 4-frame interval set.
After the above steps I to II are performed, the following design operation of step B4 is continuously performed.
Step B4, performing per-pixel logical AND processing on the moving target corresponding to the 1-frame interval set, the moving target corresponding to the 2-frame interval set, the moving target corresponding to the 3-frame interval set and the moving target corresponding to the 4-frame interval set; the resulting image is the moving object, from which the moving object area image and the position of the moving object area in the corresponding original image frame are obtained.
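The whole 5-frame combination of steps I, II and B4 can be sketched as follows, assuming a dictionary d of difference binary images keyed by 1-based gray-frame numbers (so d[(1, 2)] is the difference binary image of the first and second gray frames, as produced by a pairwise-differencing step); the helper names are assumptions.

```python
import cv2

def pair_result(a, b):
    """(A AND B) OR (A XOR B), as used for the 1- and 2-frame interval sets."""
    return cv2.bitwise_or(cv2.bitwise_and(a, b), cv2.bitwise_xor(a, b))

def five_frame_target(d):
    """Steps I/II/B4 sketch: combine the interval-set results into the final mask."""
    r1 = cv2.bitwise_and(pair_result(d[(1, 2)], d[(2, 3)]),
                         pair_result(d[(3, 4)], d[(4, 5)]))    # 1-frame interval set
    r2 = cv2.bitwise_and(pair_result(d[(1, 3)], d[(2, 4)]),
                         pair_result(d[(3, 5)], d[(2, 4)]))    # 2-frame interval set
    r3 = cv2.bitwise_and(d[(1, 4)], d[(2, 5)])                 # 3-frame interval set
    r4 = d[(1, 5)]                                             # 4-frame interval set
    return cv2.bitwise_and(cv2.bitwise_and(r1, r2),
                           cv2.bitwise_and(r3, r4))            # final per-pixel AND (step B4)
```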
After the step B is executed, the step BC is continued, and the steps C to D are continued, that is, the tracking of the moving object is realized.
The above technical scheme designs a target detection method based on a frame difference method and applies an improved frame-difference design: the video stream is first divided into groups, each original image frame in every group is then processed, and the differences between preceding and succeeding image frames are used to detect and identify the moving target efficiently. The overall target detection method reduces the manpower and material resources needed to monitor and search for relevant personnel, provides technical support for the detection of moving targets in various scenes, and effectively improves the detection accuracy for moving targets.
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (6)

1. A target detection method based on a frame difference method is used for realizing the detection of a moving target in a video stream containing the moving target obtained by a fixed-angle image capture device, and is characterized by comprising the following steps:
a, sequentially grouping each original image frame in the video stream according to the number of image frames contained in a preset single group and the preset overlapped frame number between the groups in sequence, obtaining each group corresponding to the video stream, and entering the step B;
b, respectively aiming at each group, executing the following steps B1-B4 to obtain the moving target area image corresponding to each group and the position of the moving target area in the corresponding original image frame, and realizing the identification of the moving target corresponding to the group;
b1, respectively performing denoising processing and graying processing on each original image frame in the group to obtain each gray image frame, and then entering the step B2;
b2, carrying out pairwise difference processing on each gray image frame in the group to obtain each difference image, and then entering the step B3;
b3, respectively aiming at each differential image, carrying out binarization processing on the differential image based on a gray threshold T between a foreground region and a background region in the differential image, and obtaining a differential binary image corresponding to the differential image; obtaining difference binary images corresponding to the difference images respectively, and then entering step B4;
the number of image frames contained in the preset single packet is 5; the step B also comprises the following steps I to II, and after the step B3 is executed, the steps I to II are executed in sequence;
dividing each differential binary image into a 1-frame interval set, a 2-frame interval set, a 3-frame interval set and a 4-frame interval set according to a frame interval between two gray-scale image frames corresponding to each differential binary image;
step II, for the 1-frame interval set: first, for the difference binary image corresponding to the first and second gray image frames and the difference binary image corresponding to the second and third gray image frames, executing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and then carrying out a logical OR operation on the two operation results to obtain a first initial result; then, for the difference binary image corresponding to the third and fourth gray image frames and the difference binary image corresponding to the fourth and fifth gray image frames, performing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and performing a logical OR operation on the two operation results to obtain a second initial result; finally, performing a logical AND operation on the first initial result and the second initial result, the foreground of the resulting image being the moving target corresponding to the 1-frame interval set;
for the 2-frame interval set: first, for the difference binary image corresponding to the first and third gray image frames and the difference binary image corresponding to the second and fourth gray image frames, executing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and then carrying out a logical OR operation on the two operation results to obtain a third initial result; then, for the difference binary image corresponding to the third and fifth gray image frames and the difference binary image corresponding to the second and fourth gray image frames, performing a per-pixel logical AND operation and a per-pixel logical XOR operation between the two difference binary images, and performing a logical OR operation on the two operation results to obtain a fourth initial result; finally, carrying out a logical AND operation on the third initial result and the fourth initial result, the foreground of the resulting image being the moving target corresponding to the 2-frame interval set;
aiming at the 3-frame interval set, aiming at two difference binary images, performing logic AND operation of pixels between the two difference binary images, and obtaining a foreground in a result image, namely a moving target corresponding to the 3-frame interval set;
aiming at the 4-frame interval set, taking the foreground of the single differential binary image as a moving target corresponding to the 4-frame interval set;
b4., performing logical and operation processing on the pixels of each difference binary image to obtain a foreground in the resulting image, i.e. the moving object, and further obtaining an image of the moving object region and a position of the moving object region in the corresponding original image frame; in step B4, the logic and operation processing of pixels is performed on the moving object corresponding to the 1-frame interval set, the moving object corresponding to the 2-frame interval set, the moving object corresponding to the 3-frame interval set, and the moving object corresponding to the 4-frame interval set, and the obtained result image is the moving object, so as to obtain the moving object region image and the position of the moving object region in the corresponding original image frame.
2. The method of claim 1, wherein the frame differencing-based target detection method comprises: the method also comprises the following steps C to D, and after the step B is executed, the step C is executed;
c, respectively aiming at each group, inserting the moving target area image into the original image frame positioned in the middle of the group according to the position of the moving target area corresponding to the group in the corresponding original image frame, and updating the corresponding original image frame; after the operation of the step for each group is finished, entering the step D;
and D, based on the original image frames in the groups, sequentially playing the groups to realize the tracking of the moving target.
3. The method for detecting an object based on a frame differencing method according to claim 2, wherein the frame differencing method comprises the steps of; step BC is also included, after step B is executed, the step BC is entered, and after step BC is executed, the step C is entered;
and step BC, aiming at the target area image corresponding to each group, respectively applying mathematical morphology to fill the hole, updating each target area image, and then entering the step C.
4. The method for target detection based on frame differencing of claim 1 wherein in step B3, the following steps B3-1 to B3-2 are performed for each difference image to obtain a difference binary image corresponding to the difference image; obtaining difference binary images corresponding to the difference images respectively, and then entering step B4;
b3-1, defining a gray value t. Let ω₀ be the ratio of the number of pixels with gray value less than t in the difference image to the number of all pixels in the difference image, and ω₁ the ratio of the number of pixels with gray value greater than t in the difference image to the number of all pixels in the difference image. The following objective function is applied:
G = ω₀*ω₁*(μ₀ - μ₁)²
the gray value t corresponding to the maximum value of G is taken as the gray threshold T between the foreground area and the background area in the difference image, and the process then enters step B3-2; wherein μ₀ represents the average gray value of the pixels with gray value less than t in the difference image, and μ₁ represents the average gray value of the pixels with gray value greater than t in the difference image;
and B3-2, carrying out binarization processing on the differential image based on the gray threshold T between the foreground region and the background region in the differential image, and obtaining a differential binary image corresponding to the differential image.
5. The method of claim 4, wherein the frame differencing-based target detection method comprises: in the step B3-2, based on the gray threshold T between the foreground region and the background region in the difference image, if the gray value of the pixel is greater than or equal to T, the gray value of the pixel is defined as 255; and if the pixel gray value is less than T, defining the pixel gray value as 0, thereby carrying out binarization processing on the difference image and obtaining a difference binary image corresponding to the difference image.
6. The method of claim 1, wherein the frame differencing-based target detection method comprises: In step B1, for each original image frame in the group, a median filtering method is respectively used to perform the denoising process.
CN202010337582.8A 2020-04-26 2020-04-26 Target detection method based on frame difference method Active CN111597917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010337582.8A CN111597917B (en) 2020-04-26 2020-04-26 Target detection method based on frame difference method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010337582.8A CN111597917B (en) 2020-04-26 2020-04-26 Target detection method based on frame difference method

Publications (2)

Publication Number Publication Date
CN111597917A CN111597917A (en) 2020-08-28
CN111597917B (en) 2022-08-05

Family

ID=72185117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010337582.8A Active CN111597917B (en) 2020-04-26 2020-04-26 Target detection method based on frame difference method

Country Status (1)

Country Link
CN (1) CN111597917B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611233A (en) * 2015-12-18 2016-05-25 航天恒星科技有限公司 Online video monitoring method for static scene
WO2018058530A1 (en) * 2016-09-30 2018-04-05 富士通株式会社 Target detection method and device, and image processing apparatus

Also Published As

Publication number Publication date
CN111597917A (en) 2020-08-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant