CN112967321A - Moving object detection method and device, terminal equipment and storage medium - Google Patents

Moving object detection method and device, terminal equipment and storage medium Download PDF

Info

Publication number
CN112967321A
Authority
CN
China
Prior art keywords
image
frame
difference
background
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110244248.2A
Other languages
Chinese (zh)
Inventor
李志华
张见雨
张红卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Engineering
Original Assignee
Hebei University of Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Engineering filed Critical Hebei University of Engineering
Priority to CN202110244248.2A priority Critical patent/CN112967321A/en
Publication of CN112967321A publication Critical patent/CN112967321A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/254 - Analysis of motion involving subtraction of images
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G06T 7/215 - Motion-based segmentation
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence

Abstract

The embodiment of the invention provides a method for detecting a moving target, which comprises the following steps: acquiring a plurality of video frame images of a moving target within a preset time; determining a background image corresponding to the plurality of video frame images by using an average background method; determining difference images according to the background image and an improved three-frame difference method, wherein the improved three-frame difference method uses the background image as the second frame of the three-frame difference to carry out inter-frame differencing; and filtering the difference images to obtain a binarized image of the moving target, so that the moving target can be detected based on the binarized image. Because the background image obtained with the average background method is inserted into the three-frame difference as the middle frame, the detection algorithm avoids the incomplete detection results and 'holes' that often occur in moving target detection, and improves both the speed and the accuracy of moving target detection.

Description

Moving object detection method and device, terminal equipment and storage medium
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a device for detecting a moving target, terminal equipment and a storage medium.
Background
With the spread of intelligent monitoring, community safety can be guaranteed more effectively, and using surveillance video to detect pedestrians has become essential to security work. In the prior art, the interframe difference method, the background subtraction method and the optical flow method are commonly used to detect moving objects in video.
The interframe difference method selects two or three frames from consecutive video frames and extracts the moving foreground through inter-frame differencing and thresholding. Its algorithm principle is simple and it runs fast, and it can still detect the moving target when the monitored environment changes. The background subtraction method segments foreground pixels by the difference between background pixels and foreground pixels, that is, a background parameter model is used to approximate the pixel values of the background image, and the key lies in building that parameter model. The optical flow method detects the target from its optical-flow characteristics in the time domain and extracts it from the displacement-vector optical flow field, so that a moving target can be tracked effectively.
The above methods cannot solve the problems of incomplete detection results and existence of "holes" which often occur in moving target detection.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for detecting a moving object, a terminal device, and a storage medium, so as to solve the problems of incomplete detection result, voids, and low detection accuracy in the prior art.
A first aspect of an embodiment of the present invention provides a method for detecting a moving target, including:
acquiring a plurality of video frame images of a moving target within preset time;
determining background images corresponding to the plurality of video frame images by using an average background method;
determining a difference image according to the background image and an improved three-frame difference method, wherein the improved three-frame difference method uses the background image as a second frame in the three-frame difference method to carry out interframe difference;
and filtering the differential image to obtain a binary image of the moving target, so as to detect the moving target based on the binary image.
In an embodiment, the determining the background images corresponding to the plurality of video frame images by using an average background method includes:
acquiring continuous front k frame images in a video and video frame numbers of the continuous front k frame images;
adding the continuous front k frames of images to obtain a mixed image;
and obtaining the background image according to the mixed image and the video frame number.
In an embodiment, the determining a difference image according to the background image and a modified three-frame difference method includes:
acquiring a kth frame image and a (k + 1) th frame image;
and performing interframe difference on the kth frame image, the kth +1 frame image and the background image by using the improved three-frame difference method to obtain a difference image.
In an embodiment, the difference image at least includes a first difference image and a second difference image, and the inter-frame difference of the k frame image, the k +1 frame image and the background image by using the improved three-frame difference method to obtain the difference image includes:
taking the k frame image as a first frame image, taking the background image as a second frame image and taking the (k + 1) th frame image as a third frame image;
carrying out difference operation on the first frame image and the second frame image to obtain a first difference image;
and carrying out difference operation on the second frame image and the third frame image to obtain a second difference image.
In an embodiment, the filtering the differential image to obtain a binarized image of the moving target includes:
performing thresholding operation on the first differential image and the second differential image respectively to obtain a binary image corresponding to the first differential image and a binary image corresponding to the second differential image;
performing an and operation on the binary image corresponding to the first differential image and the binary image corresponding to the second differential image to obtain an operated binary image;
and performing morphological operation on the operated binary image to obtain the binary image of the moving target.
In an embodiment, the thresholding the first difference image and the second difference image respectively to obtain the binarized image corresponding to the first difference image and the binarized image corresponding to the second difference image includes:
adopting at least one of fixed threshold operation and adaptive threshold operation to eliminate pixel points which do not meet first preset pixels in the first differential image to obtain a binary image corresponding to the first differential image;
and eliminating pixel points which do not meet a second preset pixel in the second differential image by adopting at least one of fixed threshold operation and adaptive threshold operation to obtain a binary image corresponding to the second differential image.
In an embodiment, the performing an and operation on the binarized image corresponding to the first difference image and the binarized image corresponding to the second difference image to obtain an operated binarized image includes:
and performing an and operation on pixel points in the binarized image corresponding to the first differential image and pixel points in the binarized image corresponding to the second differential image to obtain an operated binarized image, wherein the pixel points in the binarized image corresponding to the first differential image and the pixel points in the binarized image corresponding to the second differential image correspond to each other one by one.
A second aspect of the embodiments of the present invention provides a device for detecting a moving object, the device including:
the acquisition module is used for acquiring a plurality of video frame images of a moving target within preset time;
the background image determining module is used for determining background images corresponding to the plurality of video frame images by using an average background method;
the interframe difference module is used for determining a difference image according to the background image and an improved three-frame difference method, wherein the improved three-frame difference method takes the background image as a second frame in the three-frame difference method to carry out interframe difference;
and the target image determining module is used for filtering the differential image to obtain a binary image of the moving target so as to detect the moving target based on the binary image.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method for detecting a moving object according to any one of the above descriptions when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for detecting a moving object as described in any one of the above.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a method for detecting a moving target, which comprises the following steps: acquiring a plurality of video frame images of a moving target within preset time; determining background images corresponding to the plurality of video frame images by using an average background method; determining a difference image according to the background image and an improved three-frame difference method, wherein the improved three-frame difference method uses the background image as a second frame in the three-frame difference method to carry out interframe difference; and filtering the differential image to obtain a binary image of the moving target, so as to detect the moving target based on the binary image. The background image obtained by the improved mean background extraction method is added to the moving target detection algorithm in the three-frame difference process, so that the problems of incomplete detection results and 'holes' in the moving target detection process are solved, and the speed and the accuracy of the moving target detection are improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a method for detecting a moving object according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of the implementation of the refinement step of S102 in the embodiment of the present invention;
fig. 3 is a background image obtained by an average background method in the embodiment of the present invention;
FIG. 4 is a schematic diagram of a flow chart of implementing the step of refining S103 in the embodiment of the present invention;
fig. 5 is a schematic flow chart illustrating an implementation of a method for detecting a moving object according to another embodiment of the present invention;
FIG. 6 is a current video frame image before detection in an embodiment of the present invention;
FIG. 7 is a binarized image of a moving object after detection in an embodiment of the present invention;
FIG. 8 is a flowchart illustrating the refinement steps of S504 and S505 in the embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a moving object detection apparatus according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a schematic flow chart illustrating an implementation of a method for detecting a moving object according to an embodiment of the present invention. As shown in fig. 1, the method for detecting a moving object provided by this embodiment includes:
step S101: acquiring a plurality of video frame images of a moving target within preset time;
step S102: and determining background images corresponding to the plurality of video frame images by using an average background method.
In an embodiment, the scenario of this embodiment is outdoor surveillance video, and it is generally assumed that the background of such video does not change abruptly. The test was performed on a sample from the UCF dataset with a total of 510 frames, a frame rate of 15 frames per second, and an image size of 320 × 240. For example, 300 consecutive video frame images out of the 510 frames can be extracted and processed with the average background method to obtain the corresponding background image. Alternatively, a different run of consecutive frames out of the 510 frames, for example 200 consecutive frames, can be selected and processed with the average background method to obtain the corresponding background image.
Step S103: determining a difference image according to the background image and an improved three-frame difference method, wherein the improved three-frame difference method uses the background image as a second frame in the three-frame difference method to carry out interframe difference;
step S104: and filtering the differential image to obtain a binary image of the moving target, so as to detect the moving target based on the binary image.
In one embodiment, because the three-frame difference in the prior art simply subtracts neighboring images, under stable scene illumination the difference d of an unchanged area of the image sequence (i.e. the video background) is zero, while the difference d of a changed area (i.e. the moving object) is generally non-zero. The present application therefore uses the improved three-frame difference method together with the background image to determine the difference images. Thresholding, a logical AND operation and a morphological operation are then carried out on the difference images in sequence to determine the binarized image of the moving target. Processing the background image with the improved three-frame difference method improves both the speed and the accuracy of moving target detection. The moving object may be a person, an animal or a vehicle in the video frame images, and is not limited here.
The embodiment of the invention provides a method for detecting a moving target, which comprises the following steps: acquiring a plurality of video frame images of a moving target within a preset time; determining a background image corresponding to the plurality of video frame images by using an average background method; determining difference images according to the background image and an improved three-frame difference method, wherein the improved three-frame difference method uses the background image as the second frame of the three-frame difference to carry out inter-frame differencing; and filtering the difference images to obtain a binarized image of the moving target, so that the moving target can be detected based on the binarized image. Because the background image obtained with the average background method is inserted into the three-frame difference as the middle frame, the detection algorithm avoids the incomplete detection results and 'holes' that often occur in moving target detection, and improves both the speed and the accuracy of moving target detection.
Fig. 2 is a schematic flowchart of a step of refining step S102 in the embodiment of the present invention, and as shown in fig. 2, step S102 includes:
step S201: acquiring continuous front k frame images in a video and video frame numbers of the continuous front k frame images;
step S202: adding the continuous front k frames of images to obtain a mixed image;
step S203: and obtaining the background image according to the mixed image and the video frame number.
In one embodiment, the first K frames of the video are accumulated and then averaged to obtain the background image of the video. Adding up and averaging the consecutive video frame images over a period of time yields the average background (i.e. the background image) of the video frames and enables fast detection of the moving target.
The specific formula is shown in formula (1):

\mathrm{Avg}_K(x, y) = \frac{1}{K} \sum_{k=1}^{K} f_k(x, y) \qquad (1)

where Avg_K denotes the average background calculated from the first K frames of the video, K is the number of video frames used in the statistics, and f_k denotes the k-th frame of the consecutive video images in the video image sequence. K is greater than or equal to 1 and less than or equal to the total number of frames of the video.
The research scenario of this application is outdoor surveillance video, and in general the background of such video does not change abruptly. If the moving object stays at one spot for a long time, the accumulated and averaged 'background' pixel values at that spot are biased towards the object, and a hole appears when they are subtracted from the current frame image. If the object does not linger, its contribution is diluted by the accumulation and averaging, and the background estimate is accurate. Referring to fig. 3, in an embodiment, the first 300 frames are accumulated and averaged to obtain the image used as the background; fig. 3 is the background image obtained with the average background method.
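As a concrete illustration of this averaging step, the following is a minimal sketch using OpenCV and NumPy (not part of the patent itself); it assumes grayscale frames, and the file name and the choice of 300 frames are merely illustrative.

```python
import cv2
import numpy as np

def mean_background(frames):
    """Estimate the static background as the per-pixel mean of the first K frames (formula (1))."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:
        acc += f                                    # accumulate the "mixed image"
    return (acc / len(frames)).astype(np.uint8)     # divide by the number of frames K

# Illustrative usage: average the first 300 grayscale frames of a clip.
cap = cv2.VideoCapture("surveillance.mp4")          # hypothetical file name
frames = []
while len(frames) < 300:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()
background = mean_background(frames)
```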
Fig. 4 is a schematic flowchart of a refinement step of step S103 in the embodiment of the present invention, and as shown in fig. 4, step S103 includes:
step S401: acquiring a kth frame image and a (k + 1) th frame image;
step S402: taking the k frame image as a first frame image, taking the background image as a second frame image and taking the (k + 1) th frame image as a third frame image;
step S403: carrying out difference operation on the first frame image and the second frame image to obtain a first difference image;
step S404: and carrying out difference operation on the second frame image and the third frame image to obtain a second difference image.
In an embodiment, a video frame selected in an original three-frame difference method is changed, that is, a current frame (i.e., a k-th frame image) of the video is used as a first frame image, a background image obtained by using an average background method is used as a second frame image, and a next frame (i.e., a k + 1-th frame image) of the current frame is used as a third frame to perform pairwise difference. Optionally, the first frame image and the second frame image are subjected to difference operation, and the second frame image and the third frame image are subjected to difference operation.
Specifically, three images are taken: I_{k-1}(x, y), I_k(x, y) and I_{k+1}(x, y), where I_{k-1}(x, y) is the first frame image (the current video frame), I_k(x, y) is the second frame image (the background image) and I_{k+1}(x, y) is the third frame image (the next video frame). Adjacent differences are then computed: the current video frame (first frame image) is differenced with the acquired background image frame (second frame image) to obtain the first difference image A1, and the third frame image is differenced with the acquired background image frame to obtain the second difference image A2, as in formula (2):

d_{(k,k-1)}(x, y) = \left| I_k(x, y) - I_{k-1}(x, y) \right|, \qquad d_{(k+1,k)}(x, y) = \left| I_{k+1}(x, y) - I_k(x, y) \right| \qquad (2)

where d_{(k,k-1)}(x, y) is the first difference image A1 and d_{(k+1,k)}(x, y) is the second difference image A2.
Fig. 5 is a schematic flow chart illustrating an implementation of a method for detecting a moving object according to another embodiment of the present invention. As shown in fig. 5, the method for detecting a moving object provided by this embodiment includes:
step S501: acquiring a plurality of video frame images of a moving target within preset time;
step S502: determining background images corresponding to the plurality of video frame images by using an average background method;
step S503: determining a difference image according to the background image and an improved three-frame difference method, wherein the improved three-frame difference method uses the background image as a second frame in the three-frame difference method to carry out interframe difference;
step S504: performing thresholding operation on the first differential image and the second differential image respectively to obtain a binary image corresponding to the first differential image and a binary image corresponding to the second differential image;
step S505: performing an and operation on the binary image corresponding to the first differential image and the binary image corresponding to the second differential image to obtain an operated binary image;
step S506: and performing morphological operation on the operated binary image to obtain the binary image of the moving target.
In one embodiment, the thresholding operation is specifically as follows: the first difference image A1 and the second difference image A2 are thresholded separately to obtain the corresponding binarized images B1 and B2. Since there is usually considerable noise in a real scene, the difference may be non-zero at some points outside the moving-object region. Thresholding is therefore required after obtaining the difference images in order to filter out these noise points. The thresholding of the difference images is realized through formula (3), which finally yields the corresponding binarized images; the binarization threshold Td used in this application is obtained through repeated simulation experiments.
Specifically, the first difference image d_{(k,k-1)}(x, y) is thresholded: pixels of d_{(k,k-1)}(x, y) whose value is greater than or equal to the binarization threshold Td are set to 1, and pixels whose value is smaller than Td are set to 0. The second difference image d_{(k+1,k)}(x, y) is thresholded in the same way: pixels greater than or equal to Td are set to 1 and pixels smaller than Td are set to 0. Here the binarization threshold Td is set to 65, i.e. pixels with values from 65 up to 255 are set to 1 and pixels with values below 65 are set to 0, as in formula (3):

b_{(k,k-1)}(x, y) = \begin{cases} 1, & d_{(k,k-1)}(x, y) \ge T_d \\ 0, & d_{(k,k-1)}(x, y) < T_d \end{cases} \qquad b_{(k+1,k)}(x, y) = \begin{cases} 1, & d_{(k+1,k)}(x, y) \ge T_d \\ 0, & d_{(k+1,k)}(x, y) < T_d \end{cases} \qquad (3)

where b_{(k,k-1)}(x, y) is the binarized image B1 corresponding to the first difference image d_{(k,k-1)}(x, y), and b_{(k+1,k)}(x, y) is the binarized image B2 corresponding to the second difference image d_{(k+1,k)}(x, y).
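A minimal thresholding sketch following formula (3), assuming OpenCV; representing logical 1 as 255 so the mask can be displayed directly is an implementation choice, not part of the embodiment.

```python
import cv2

TD = 65  # binarization threshold reported above, obtained through simulation experiments

def binarize(diff_img, td=TD):
    """Formula (3): pixels whose difference is >= Td become foreground, the rest become 0."""
    # cv2.threshold keeps values strictly greater than the threshold, so use td - 1 for ">= td".
    _, binary = cv2.threshold(diff_img, td - 1, 255, cv2.THRESH_BINARY)
    return binary
```

Applying binarize to the two difference images from the previous sketch gives the binarized images B1 and B2 that are combined below.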
To further improve the integrity of the detection results, the improved three-frame difference method also utilizes the inter-frame similarity of the continuous image sequence by logically and-ing the two adjacently differentiated and thresholded binarized images B1 and B2. Specifically, as shown in equation (4), the background portion pixel value is set to 0, and the moving object portion pixel value is set to 1.
B_k(x, y) = \begin{cases} 1, & b_{(k,k-1)}(x, y) = 1 \text{ and } b_{(k+1,k)}(x, y) = 1 \\ 0, & \text{otherwise} \end{cases} \qquad (4)

where B_k(x, y) is the image obtained by AND-ing the binarized image b_{(k,k-1)}(x, y) corresponding to the first difference image with the binarized image b_{(k+1,k)}(x, y) corresponding to the second difference image.
Performing the logical AND operation on the binarized images B1 and B2 makes the detection result more complete. The resulting image is then processed with a 3 × 3 elliptical kernel to suppress the effect of noise.
Further, after the binarized images B1 and B2 are AND-ed, a morphological operation is applied to the resulting binarized image to obtain the binarized image of the moving target. The morphological operation enlarges the connected domain of the moving-target detection result through dilation and erosion, and this connected domain is the binarized image of the moving target. Optionally, two dilation iterations are performed here to enlarge the connected region.
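The combination step could be sketched as follows with OpenCV; the exact ordering of the 3 × 3 elliptical noise suppression (an opening here) and the two dilation iterations is one plausible reading of the description above, not a prescribed sequence.

```python
import cv2

def combine_and_clean(b1, b2):
    """Formula (4) plus post-processing: AND the two binarized images, suppress noise
    with a 3x3 elliptical kernel, then dilate twice to enlarge the connected region."""
    combined = cv2.bitwise_and(b1, b2)                              # formula (4)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))   # 3x3 elliptical kernel
    cleaned = cv2.morphologyEx(combined, cv2.MORPH_OPEN, kernel)    # erosion then dilation removes speckle noise
    return cv2.dilate(cleaned, kernel, iterations=2)                # two dilation iterations
```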
In an embodiment, with reference to figs. 6-7, the 10th frame image is used as the first frame image, the background image as the second frame image and the 11th frame image as the third frame image (fig. 6 shows the current video frame before detection). Inter-frame differencing yields the two corresponding difference images, which are then thresholded, AND-ed and morphologically processed in sequence to obtain the detection result shown in fig. 7, i.e. the binarized image of the moving object.
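Stitching the sketches above together for this example (with 0-based indexing, so video frame 10 is frames[9]); the variable names come from the earlier sketches and are assumptions rather than reference code.

```python
# Frame 10 as the first image, the mean background as the second, frame 11 as the third:
d1, d2 = improved_three_frame_diff(frames[9], background, frames[10])
b1, b2 = binarize(d1), binarize(d2)
mask = combine_and_clean(b1, b2)   # binarized image of the moving target (cf. fig. 7)
```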
Fig. 8 is a schematic flowchart of the refinement steps of S504 and S505 in the embodiment of the present invention, and as shown in fig. 8, S504 and S505 include:
step S801: adopting at least one of fixed threshold operation and adaptive threshold operation to eliminate pixel points which do not meet first preset pixels in the first differential image to obtain a binary image corresponding to the first differential image;
step S802: adopting at least one of fixed threshold operation and adaptive threshold operation to eliminate pixel points which do not meet a second preset pixel in the second differential image to obtain a binary image corresponding to the second differential image;
step S803: and performing an and operation on pixel points in the binarized image corresponding to the first differential image and pixel points in the binarized image corresponding to the second differential image to obtain an operated binarized image, wherein the pixel points in the binarized image corresponding to the first differential image and the pixel points in the binarized image corresponding to the second differential image correspond to each other one by one.
In one embodiment, thresholding can be regarded as the simplest image segmentation method: it separates the desired object (a part or the whole) from an image based on the gray-value difference between the object and the background. The segmentation operates at the pixel level; the gray value of each pixel in the image is compared with a given threshold and classified accordingly (the gray value assigned to the segmented object is prescribed, e.g. black or white). In the present application, the pixels of the first difference image A1 that satisfy the first preset pixel are kept and those that do not are removed, which yields the corresponding binarized image B1; the pixels of the second difference image A2 that satisfy the second preset pixel are kept and those that do not are removed, which yields the corresponding binarized image B2. The first preset pixel and the second preset pixel are set according to the specific situation and may be identical.
Further, an AND operation is performed on the resulting binarized images B1 and B2 to determine the binarized image after the operation. Specifically, suppose the binarized image B1 contains 3 pixels b11, b12 and b13 and the binarized image B2 contains 3 pixels b21, b22 and b23. When B1 and B2 are AND-ed, b11 is AND-ed with b21, b12 with b22 and b13 with b23, giving pixels C1, C2 and C3 respectively. The image composed of the pixels C1, C2 and C3 is the binarized image after the operation. The number of pixels in this example is only illustrative; the actual number is set according to the actual situation.
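The three-pixel example above can be written out directly with NumPy (the pixel values are chosen arbitrarily for illustration):

```python
import numpy as np

B1 = np.array([1, 0, 1], dtype=np.uint8)        # pixels b11, b12, b13
B2 = np.array([1, 1, 0], dtype=np.uint8)        # pixels b21, b22, b23
C = np.logical_and(B1, B2).astype(np.uint8)     # per-pixel AND
print(C)                                        # [1 0 0] -> pixels C1, C2, C3
```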
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In one embodiment, as shown in fig. 9, there is provided a moving object detecting apparatus including: an obtaining module 91, a background image determining module 92, an inter-frame difference module 93 and a target image determining module 94, wherein:
an obtaining module 91, configured to obtain a plurality of video frame images of a moving object within a preset time;
a background image determining module 92, configured to determine background images corresponding to the multiple video frame images by using an average background method;
an inter-frame difference module 93, configured to determine a difference image according to the background image and an improved three-frame difference method, where the improved three-frame difference method performs inter-frame difference on the background image as a second frame in the three-frame difference method;
a target image determining module 94, configured to filter the difference image to obtain a binarized image of the moving target, so as to detect the moving target based on the binarized image.
In one embodiment, the plurality of video frame images includes consecutive top k frame images in the video, and the background image determination module 92 includes:
the first image acquisition module is used for acquiring continuous front k frame images in a video and video frame numbers of the continuous front k frame images;
the addition operation module is used for performing addition operation on the continuous previous k frames of images to obtain a mixed image;
and the average calculation module is used for obtaining the background image according to the mixed image and the video frame number.
In one embodiment, the inter-frame difference module 93 includes:
the second image acquisition module is used for acquiring a kth frame image and a (k + 1) th frame image;
and the interframe difference calculation module is used for carrying out interframe difference on the k frame image, the k +1 frame image and the background image by using the improved three-frame difference method to obtain a difference image.
In one embodiment, the difference image includes at least a first difference image and a second difference image, and the inter-frame difference calculating module includes:
a third image obtaining module, configured to use the k frame image as a first frame image, use the background image as a second frame image, and use the (k + 1) th frame image as a third frame image;
the first differential image operation module is used for carrying out differential operation on the first frame image and the second frame image to obtain a first differential image;
and the second difference image operation module is used for carrying out difference operation on the second frame image and the third frame image to obtain a second difference image.
In one embodiment, the target image determination module 94 includes:
the thresholding operation module is used for respectively carrying out thresholding operation on the first differential image and the second differential image to obtain a binary image corresponding to the first differential image and a binary image corresponding to the second differential image;
the operation module is used for performing AND operation on the binary image corresponding to the first difference image and the binary image corresponding to the second difference image to obtain an operated binary image;
and the morphological operation module is used for performing morphological operation on the operated binary image to obtain the binary image of the moving target.
In one embodiment, the thresholding operation module includes:
the first binarized image determining module is used for eliminating pixel points which do not meet first preset pixels in the first differential image by adopting at least one of fixed threshold operation and adaptive threshold operation to obtain a binarized image corresponding to the first differential image;
and the second binarization image determining module is used for eliminating pixel points which do not meet second preset pixels in the second difference image by adopting at least one of fixed threshold operation and adaptive threshold operation to obtain a binarization image corresponding to the second difference image.
In one embodiment, the and operation module includes:
and the operation and operation module is used for performing and operation on pixel points in the binarized image corresponding to the first difference image and pixel points in the binarized image corresponding to the second difference image to obtain an operated binarized image, wherein the pixel points in the binarized image corresponding to the first difference image and the pixel points in the binarized image corresponding to the second difference image are in one-to-one correspondence.
Fig. 10 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 10, the terminal device 10 of this embodiment includes: a processor 1001, a memory 1002 and a computer program 1003 stored in the memory 1002 and executable on the processor 1001. When executing the computer program 1003, the processor 1001 implements the steps in the above embodiments of the method for detecting a moving object, such as steps S101 to S104 shown in fig. 1. Alternatively, when executing the computer program 1003, the processor 1001 implements the functions of the modules/units in the above device embodiments, such as the functions of the modules 91 to 94 shown in fig. 9.
Illustratively, the computer program 1003 may be divided into one or more modules/units, which are stored in the memory 1002 and executed by the processor 1001 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 1003 in the terminal device 10. For example, the computer program 1003 may be divided into an obtaining module, a background image determining module, an inter-frame difference module and a target image determining module, and the specific functions of the modules are as follows:
the acquisition module is used for acquiring a plurality of video frame images of a moving target within preset time;
the background image determining module is used for determining background images corresponding to the plurality of video frame images by using an average background method;
the interframe difference module is used for determining a difference image according to the background image and an improved three-frame difference method, wherein the improved three-frame difference method takes the background image as a second frame in the three-frame difference method to carry out interframe difference;
and the target image determining module is used for filtering the differential image to obtain a binary image of the moving target so as to detect the moving target based on the binary image.
The terminal device 10 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device 10 may include, but is not limited to, a processor 1001 and a memory 1002. Those skilled in the art will appreciate that fig. 10 is merely an example of a terminal device and does not constitute a limitation; the terminal device may include more or fewer components than shown, combine certain components, or use different components, and may, for example, also include input/output devices, network access devices, buses, etc.
The Processor 1001 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 1002 may be an internal storage unit of the terminal device 10, such as a hard disk or a memory of the terminal device 10. The memory 1002 may also be an external storage device of the terminal device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the terminal device 10. Further, the memory 1002 may also include both an internal storage unit and an external storage device of the terminal device 10. The memory 1002 is used for storing the computer programs and other programs and data required by the terminal device. The memory 1002 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method for detecting a moving object, comprising:
acquiring a plurality of video frame images of a moving target within preset time;
determining background images corresponding to the plurality of video frame images by using an average background method;
determining a difference image according to the background image and an improved three-frame difference method, wherein the improved three-frame difference method uses the background image as a second frame in the three-frame difference method to carry out interframe difference;
and filtering the differential image to obtain a binary image of the moving target, so as to detect the moving target based on the binary image.
2. The method for detecting a moving object according to claim 1, wherein the plurality of video frame images include k consecutive previous frame images in a video, and the determining the background images corresponding to the plurality of video frame images by using an average background method includes:
acquiring continuous front k frame images in a video and video frame numbers of the continuous front k frame images;
adding the continuous front k frames of images to obtain a mixed image;
and obtaining the background image according to the mixed image and the video frame number.
3. The method for detecting a moving object according to claim 2, wherein said determining a difference image based on said background image and a modified three-frame difference method comprises:
acquiring a kth frame image and a (k + 1) th frame image;
and performing interframe difference on the kth frame image, the kth +1 frame image and the background image by using the improved three-frame difference method to obtain a difference image.
4. The method for detecting a moving object according to claim 3, wherein the difference image at least includes a first difference image and a second difference image, and the inter-frame difference of the k frame image, the k +1 frame image and the background image by using the improved three-frame difference method to obtain the difference image comprises:
taking the k frame image as a first frame image, taking the background image as a second frame image and taking the (k + 1) th frame image as a third frame image;
carrying out difference operation on the first frame image and the second frame image to obtain a first difference image;
and carrying out difference operation on the second frame image and the third frame image to obtain a second difference image.
5. The method for detecting a moving object according to claim 4, wherein said filtering the difference image to obtain a binarized image of the moving object comprises:
performing thresholding operation on the first differential image and the second differential image respectively to obtain a binary image corresponding to the first differential image and a binary image corresponding to the second differential image;
performing an and operation on the binary image corresponding to the first differential image and the binary image corresponding to the second differential image to obtain an operated binary image;
and performing morphological operation on the operated binary image to obtain the binary image of the moving target.
6. The method for detecting a moving object according to any one of claims 1 to 5, wherein the thresholding operations on the first difference image and the second difference image respectively to obtain the binarized image corresponding to the first difference image and the binarized image corresponding to the second difference image comprise:
adopting at least one of fixed threshold operation and adaptive threshold operation to eliminate pixel points which do not meet first preset pixels in the first differential image to obtain a binary image corresponding to the first differential image;
and eliminating pixel points which do not meet a second preset pixel in the second differential image by adopting at least one of fixed threshold operation and adaptive threshold operation to obtain a binary image corresponding to the second differential image.
7. The method for detecting a moving object according to claim 6, wherein performing an and operation on the binarized image corresponding to the first difference image and the binarized image corresponding to the second difference image to obtain an operated binarized image comprises:
and performing an and operation on pixel points in the binarized image corresponding to the first differential image and pixel points in the binarized image corresponding to the second differential image to obtain an operated binarized image, wherein the pixel points in the binarized image corresponding to the first differential image and the pixel points in the binarized image corresponding to the second differential image correspond to each other one by one.
8. An apparatus for detecting a moving object, the apparatus comprising:
the acquisition module is used for acquiring a plurality of video frame images of a moving target within preset time;
the background image determining module is used for determining background images corresponding to the plurality of video frame images by using an average background method;
the interframe difference module is used for determining a difference image according to the background image and an improved three-frame difference method, wherein the improved three-frame difference method takes the background image as a second frame in the three-frame difference method to carry out interframe difference;
and the target image determining module is used for filtering the differential image to obtain a binary image of the moving target so as to detect the moving target based on the binary image.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method for detecting a moving object according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method for detecting a moving object according to any one of claims 1 to 7.
CN202110244248.2A 2021-03-05 2021-03-05 Moving object detection method and device, terminal equipment and storage medium Pending CN112967321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110244248.2A CN112967321A (en) 2021-03-05 2021-03-05 Moving object detection method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110244248.2A CN112967321A (en) 2021-03-05 2021-03-05 Moving object detection method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112967321A true CN112967321A (en) 2021-06-15

Family

ID=76276786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110244248.2A Pending CN112967321A (en) 2021-03-05 2021-03-05 Moving object detection method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112967321A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627036A (en) * 2022-03-14 2022-06-14 北京有竹居网络技术有限公司 Multimedia resource processing method and device, readable medium and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997598A (en) * 2017-01-06 2017-08-01 陕西科技大学 The moving target detecting method merged based on RPCA with three-frame difference
JP2018148345A (en) * 2017-03-03 2018-09-20 株式会社デンソーアイティーラボラトリ On-vehicle camera system, adhered matter detecting apparatus, adhered matter removing method, and adhered matter detecting program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997598A (en) * 2017-01-06 2017-08-01 陕西科技大学 The moving target detecting method merged based on RPCA with three-frame difference
JP2018148345A (en) * 2017-03-03 2018-09-20 株式会社デンソーアイティーラボラトリ On-vehicle camera system, adhered matter detecting apparatus, adhered matter removing method, and adhered matter detecting program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627036A (en) * 2022-03-14 2022-06-14 北京有竹居网络技术有限公司 Multimedia resource processing method and device, readable medium and electronic equipment
CN114627036B (en) * 2022-03-14 2023-10-27 北京有竹居网络技术有限公司 Processing method and device of multimedia resources, readable medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN110598558A (en) Crowd density estimation method, device, electronic equipment and medium
CN111144337B (en) Fire detection method and device and terminal equipment
CN111079764B (en) Low-illumination license plate image recognition method and device based on deep learning
CN108564579B (en) Concrete crack detection method and detection device based on time-space correlation
CN111783524B (en) Scene change detection method and device, storage medium and terminal equipment
CN112364865B (en) Method for detecting small moving target in complex scene
CN111275036A (en) Target detection method, target detection device, electronic equipment and computer-readable storage medium
CN110991310A (en) Portrait detection method, portrait detection device, electronic equipment and computer readable medium
CN111582032A (en) Pedestrian detection method and device, terminal equipment and storage medium
CN113780110A (en) Method and device for detecting weak and small targets in image sequence in real time
CN111027564A (en) Low-illumination imaging license plate recognition method and device based on deep learning integration
CN115049954A (en) Target identification method, device, electronic equipment and medium
CN112967321A (en) Moving object detection method and device, terminal equipment and storage medium
CN110765875B (en) Method, equipment and device for detecting boundary of traffic target
CN113542868A (en) Video key frame selection method and device, electronic equipment and storage medium
CN112418089A (en) Gesture recognition method and device and terminal
CN109255311B (en) Image-based information identification method and system
CN110633705A (en) Low-illumination imaging license plate recognition method and device
Zhang et al. Motion detection based on improved Sobel and ViBe algorithm
Sengar Motion segmentation based on structure-texture decomposition and improved three frame differencing
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
CN116246298A (en) Space occupation people counting method, terminal equipment and storage medium
CN114596210A (en) Noise estimation method, device, terminal equipment and computer readable storage medium
Lam et al. Real Time Detection of Object Blob Localization Application using 1-D Connected Pixel and Windowing Method on FPGA
Deshpande et al. Use of horizontal and vertical edge processing technique to improve number plate detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210615)