CN110910429A - Moving target detection method and device, storage medium and terminal equipment

Info

Publication number
CN110910429A
CN110910429A
Authority
CN
China
Prior art keywords
motion
pixel point
pixel
frame
image
Prior art date
Legal status
Granted
Application number
CN201911138198.9A
Other languages
Chinese (zh)
Other versions
CN110910429B (en)
Inventor
胡艳萍
Current Assignee
Chengdu Lianzhou International Technology Co ltd
Original Assignee
TP Link Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by TP Link Technologies Co Ltd
Priority to CN201911138198.9A
Publication of CN110910429A
Application granted
Publication of CN110910429B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/254 - Analysis of motion involving subtraction of images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20224 - Image subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a moving target detection method and device, a storage medium and terminal equipment, wherein the method comprises the following steps: acquiring an image sequence to be processed, wherein the image sequence to be processed comprises m frames of images; calculating and obtaining a pixel motion metric index of each pixel point of the m1 frame images according to the Y component value of each pixel point of each frame image; acquiring motion candidate regions of the m1 frame images according to the pixel motion metric index; acquiring m2 frame differential images according to the motion candidate regions of the m1 frame images; correcting the m2 frame differential images according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring an initial motion region; and performing area statistical correction on the initial motion region to obtain a moving target region. By adopting the technical scheme of the invention, the accuracy of moving target detection can be improved, the false recognition rate can be reduced, and good real-time performance can be achieved.

Description

Moving target detection method and device, storage medium and terminal equipment
Technical Field
The present invention relates to the field of moving object detection technologies, and in particular, to a moving object detection method and apparatus, a computer-readable storage medium, and a terminal device.
Background
Moving target detection segments the moving areas in an image sequence from the relatively static background to obtain moving foreground targets, so that the moving targets can be further processed at higher levels such as tracking, classification and identification.
At present, the mainstream moving object detection methods mainly include the optical flow method and the background difference method. The optical flow method detects a moving object by determining, from the change of pixel velocities in the image sequence, the gray-scale changes and the correlation of adjacent pixels at different times. The background difference method constructs a background model to replace the real background scene and compares the image sequence with the background model to identify the difference between the moving target and the background, thereby detecting the moving target; typical background models include the Gaussian mixture model, ViBe, and the like.
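For reference, a conventional background-difference baseline of the kind described above can be sketched with OpenCV's Gaussian-mixture background subtractor; the file name and parameter values below are illustrative assumptions and are not part of the present disclosure.

```python
# A minimal sketch of the conventional background-difference baseline, assuming OpenCV is available.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # 255 = foreground, 127 = shadow, 0 = background
    # Dynamic backgrounds (swinging leaves, water ripples) tend to leak into fg_mask,
    # which is the false-detection problem the present method aims to reduce.

cap.release()
```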
However, the optical flow method is susceptible to noise and has poor anti-noise performance, while the background difference method is sensitive to ambient light changes and is easily interfered with by dynamic changes of the background scene (such as swinging leaves, water ripples on a lake surface and weather changes), illumination changes and cluttered backgrounds during detection; as a result, the accuracy of moving target detection is low and the dynamic background is likely to be mistakenly recognized as a moving target.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a moving object detection method, apparatus, computer-readable storage medium and terminal device, which can improve the accuracy of moving object detection, reduce the false identification rate, and have better real-time performance.
In order to solve the above technical problem, an embodiment of the present invention provides a moving object detection method, including:
acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m > 1;
calculating and obtaining a pixel motion metric index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
Acquiring a motion candidate region of the m1 frame image according to the pixel motion metric index;
acquiring a m2 frame differential image according to the motion candidate area of the m1 frame image; wherein 0 < m2 < m1;
correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring an initial motion area;
and performing area statistical correction on the initial motion area to obtain a motion target area.
Further, the obtaining of the motion candidate region of the m1 frame image according to the pixel motion metric index specifically includes:
for any frame image in the m1 frame images, judging the state of each pixel point according to the pixel motion metric index of each pixel point of the image;
when the value of the pixel motion metric index is larger than 0, judging the corresponding pixel point as a motion point and marking the motion point as 1, otherwise, judging the corresponding pixel point as a background point and marking the background point as 0;
and obtaining a motion candidate region of the image according to the marked motion point and the marked background point.
Further, the acquiring a m2 frame differential image according to the motion candidate region of the m1 frame image specifically includes:
for the l-th frame differential image of the m2 frame differential images, calculating the l-th frame differential image G_l according to the formula G_l = |D_{k+l} - D_l|; wherein D_{k+l} represents the motion candidate region of the (k+l)-th frame image, 0 < l ≤ m2, l < k+l ≤ m1.
Further, the correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring the initial motion region specifically includes:
correcting the pixel points in each frame of differential image according to the U component value and the V component value of each pixel point of each frame of differential image, and correspondingly obtaining m2 corrected motion candidate areas;
performing addition calculation on the m2 corrected motion candidate regions to obtain added motion candidate regions;
and carrying out binarization processing on the added motion candidate area to obtain the initial motion area.
Further, the correcting the pixel point in each frame of the differential image according to the U component value and the V component value of each pixel point of each frame of the differential image specifically includes:
for any pixel point of any frame differential image, judging whether the mark of the pixel point is 1;
when the mark of the pixel point is 1, acquiring an a×a neighborhood of the pixel point; wherein the a×a neighborhood takes the pixel point as the central pixel point, and a > 0;
acquiring U component values and V component values of the pixel points marked as 0 contained in the a×a neighborhood;
calculating to obtain neighborhood U component and V component mean values of the pixel points according to the U component values and the V component values;
calculating Euclidean distance between the pixel point and the mean value;
and when the Euclidean distance is smaller than a preset distance threshold value, correcting the mark of the pixel point to be 0.
Further, the performing area statistical correction on the initial motion area to obtain a motion target area specifically includes:
correcting the pixel points in the initial motion region according to the marking value of each pixel point in the initial motion region, and correspondingly obtaining a corrected initial motion region;
and obtaining the motion target area according to the corrected initial motion area.
Further, the correcting the pixel points in the initial motion region according to the mark value of each pixel point in the initial motion region specifically includes:
for any pixel point in the initial motion region, acquiring a b×b neighborhood of the pixel point; wherein the b×b neighborhood takes the pixel point as the central pixel point, and b > 0;
counting the number n0 of the pixel points marked as 0 and the number n1 of the pixel points marked as 1 in the b×b neighborhood;
when b×b×β ≤ n0, the mark of the pixel point is corrected to 0;
when b×b×β > n0, the mark of the pixel point is corrected to 1.
In order to solve the above technical problem, an embodiment of the present invention further provides a moving object detecting device, including:
the image sequence acquisition module is used for acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m > 1;
the pixel motion metric index acquisition module is used for calculating and obtaining the pixel motion metric index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
A motion candidate region obtaining module, configured to obtain a motion candidate region of the m1 frame image according to the pixel motion metric indicator;
a difference image obtaining module, configured to obtain a difference image of m2 frames according to the motion candidate region of the m1 frame image; wherein 0 < m2 < m1;
the initial motion area acquisition module is used for correcting the m2 frame differential image according to the neighborhood U component value and the neighborhood V component value of each pixel point of each frame differential image and acquiring an initial motion area;
and the moving target area acquisition module is used for carrying out area statistical correction on the initial moving area to acquire a moving target area.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program; wherein the computer program, when running, controls the device on which the computer-readable storage medium is located to perform any one of the above-mentioned moving object detection methods.
An embodiment of the present invention further provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor implements any one of the above-described moving object detection methods when executing the computer program.
Compared with the prior art, the embodiment of the invention provides a moving object detection method, a device, a computer-readable storage medium and a terminal device, wherein a pixel motion metric index of each pixel point of an m1 frame image is obtained by calculating a Y component value of each pixel point of each frame image in an image sequence to be processed, a motion candidate area of an m1 frame image is obtained according to the pixel motion metric index of the pixel point, an m2 frame differential image is obtained by performing differential calculation on the motion candidate area of the m1 frame image, an m2 frame differential image is corrected by the U component value and the V component value of each pixel point of each frame differential image to obtain an initial moving area, the initial moving area is subjected to area statistical correction to obtain a moving object area, the accuracy of moving object detection can be improved, the error identification rate is reduced, and the real-time performance is better.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of a method for detecting a moving object according to the present invention;
fig. 2 is a block diagram of a preferred embodiment of a moving object detecting apparatus according to the present invention;
fig. 3 is a block diagram of a preferred embodiment of a terminal device provided in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any inventive step, are within the scope of the present invention.
An embodiment of the present invention provides a moving object detection method. Referring to fig. 1, which is a flowchart of a preferred embodiment of the moving object detection method provided by the present invention, the method includes steps S11 to S16:
step S11, acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m > 1;
step S12, calculating and obtaining a pixel motion metric index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
Step S13, obtaining a motion candidate area of the m1 frame image according to the pixel motion metric index;
step S14, acquiring a m2 frame differential image according to the motion candidate area of the m1 frame image; wherein 0 < m2 < m1;
step S15, correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring an initial motion area;
and step S16, performing area statistical correction on the initial motion area to acquire a motion target area.
Specifically, the image sequence to be processed (including m frames of images) may be obtained in real time by an electronic device, for example, the image sequence to be processed is obtained in real time by a camera of an electronic device with a video recording function, such as a web camera, a mobile phone, a tablet computer, and the like, which is not limited in the present invention.
After obtaining the image sequence to be processed, in order to obtain the brightness component value of each frame image, namely the Y component value of each pixel point, the RGB color space of the m frames of images needs to be converted into the YUV color space; the Y component value of each pixel point of each frame image in the m frames of images is then correspondingly obtained, and based on the Y component value of each pixel point, the pixel motion metric index MD of each pixel point of the m1 frame images is calculated; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, and k represents the number of interval frames, preferably 1 ≤ k ≤ 4. For example, if k is 2, the pixel motion metric index of the 1st pixel point of the 1st frame image is MD_{1,1} = |Y_{3,1} - Y_{1,1}| - (1-α)*Y_{3,1}, where Y_{3,1} represents the Y component value of the 1st pixel point of the 3rd frame image and Y_{1,1} represents the Y component value of the 1st pixel point of the 1st frame image.
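For illustration only, the calculation of the pixel motion metric index MD can be sketched as follows with NumPy; the function name, the array types and the values of k and α in the usage comment are assumptions of the sketch, not prescribed choices.

```python
# A minimal sketch of MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1 - alpha) * Y_{k+i,j},
# assuming the Y channels of the i-th and (k+i)-th frames are uint8 numpy arrays.
import numpy as np

def pixel_motion_metric(y_i: np.ndarray, y_k_plus_i: np.ndarray, alpha: float) -> np.ndarray:
    y_cur = y_i.astype(np.float32)
    y_next = y_k_plus_i.astype(np.float32)
    return np.abs(y_next - y_cur) - (1.0 - alpha) * y_next

# Usage sketch with k = 2: the MD map of the 1st frame compares it with the 3rd frame.
# md_1 = pixel_motion_metric(y_frame1, y_frame3, alpha=0.95)
```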
According to the calculated value of the pixel motion metric index MD of each pixel point of each frame image in the m1 frame images, the motion state of each corresponding pixel point can be judged, and a motion candidate region of each of the m1 frame images can be correspondingly obtained (each frame image corresponds to one motion candidate region). Then, differential calculation is performed according to the obtained motion candidate regions of the m1 frame images, and m2 frame differential images are correspondingly obtained. Next, the chroma component values of each pixel point of each frame differential image in the m2 frame differential images, namely the U component value and the V component value, are obtained, the motion states of the pixel points in the m2 frame differential images are corrected based on the U component value and the V component value of each pixel point, the m2 frames of corrected differential images are correspondingly obtained, and the initial motion region is obtained according to the obtained m2 frames of corrected differential images. Further, area statistical correction is performed on the motion state of each pixel point in the obtained initial motion region, so that the motion target region is obtained according to the corrected initial motion region.
The moving object detection method provided by the embodiment of the invention defines the pixel motion metric index MD of the pixel points according to the brightness component values of the pixel points, judges the motion state of each pixel point according to the value of the MD, correspondingly obtains the motion candidate area, performs differential calculation based on the motion candidate area, correspondingly obtains the differential image, corrects the motion state of the pixel points of the differential image according to the chrominance component values of the pixel points, correspondingly obtains the initial motion area, and corrects the motion state of the pixel points in the initial motion area, thereby obtaining the moving object area.
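For illustration only, the overall flow of steps S11 to S16 can be sketched as follows; chroma_correct and area_statistical_correct are hypothetical helpers corresponding to the embodiments detailed further below, the frames are assumed to be YUV arrays of shape (H, W, 3), and the values of k, α, b, β and the binarization threshold are illustrative assumptions rather than prescribed choices.

```python
# End-to-end sketch of steps S11-S16 (assumes m > 2*k frames are available).
import numpy as np

def detect_moving_target(frames_yuv, k=2, alpha=0.95):
    m = len(frames_yuv)
    m1 = m - k                      # frames for which MD can be computed
    m2 = m1 - k                     # frames for which a candidate-region difference exists
    Y = [f[..., 0].astype(np.float32) for f in frames_yuv]

    # Steps S12/S13: motion candidate regions D from the Y channel (1 = motion point)
    D = [((np.abs(Y[i + k] - Y[i]) - (1.0 - alpha) * Y[i + k]) > 0).astype(np.uint8)
         for i in range(m1)]

    # Step S14: differential images G_l = |D_{k+l} - D_l|
    G = [np.abs(D[l + k].astype(np.int16) - D[l].astype(np.int16)).astype(np.uint8)
         for l in range(m2)]

    # Step S15: chroma-based correction, accumulation and binarization
    # (chroma taken from the l-th original frame here, an assumption of the sketch)
    corrected = [chroma_correct(G[l], frames_yuv[l][..., 1], frames_yuv[l][..., 2])
                 for l in range(m2)]
    votes = np.sum(np.stack(corrected, axis=0), axis=0)
    initial_region = (votes >= max(1, m2 // 2)).astype(np.uint8)   # vote threshold is an assumption

    # Step S16: area statistical correction gives the final moving-target region
    return area_statistical_correct(initial_region, b=3, beta=0.8)
```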
In another preferred embodiment, the obtaining the motion candidate region of the m1 frame image according to the pixel motion metric index specifically includes:
for any frame image in the m1 frame images, judging the state of each pixel point according to the pixel motion metric index of each pixel point of the image;
when the value of the pixel motion metric index is larger than 0, judging the corresponding pixel point as a motion point and marking the motion point as 1, otherwise, judging the corresponding pixel point as a background point and marking the background point as 0;
and obtaining a motion candidate region of the image according to the marked motion point and the marked background point.
Specifically, the method for acquiring the motion candidate region of each frame image is the same, and the acquisition of the motion candidate region of any one of the m1 frame images is taken as an example here. In conjunction with the above embodiment, according to the calculated value of the pixel motion metric index MD of each pixel point in the image, the motion state of each corresponding pixel point can be determined, that is, whether a pixel point is a motion point. When the value of the pixel motion metric index MD is greater than 0, the corresponding pixel point is determined to be a motion point and is marked as 1; when the value of the pixel motion metric index MD is not greater than 0, the corresponding pixel point is determined to be a background point rather than a motion point and is marked as 0. After the motion states of all pixel points in the image are determined and marked, the motion candidate region of the image is correspondingly obtained (the motion points in the motion candidate region are marked as 1, and the background points are marked as 0).
It should be noted that each motion candidate region D is a binary image having the same size as the corresponding original image, and the value of a pixel point in the image D can only be 0 or 1. If the value of the pixel point at a certain position in the image D is 1, the pixel point at the corresponding position in the original image is a motion point (i.e., in a motion state); if the value is 0, the pixel point at the corresponding position in the original image is not a motion point but a background point (i.e., in a static state).
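For illustration only, the marking rule above reduces to a simple threshold on the MD map; the function name is an assumption of the sketch.

```python
# Sketch of the marking rule: MD > 0 -> motion point (1), otherwise background point (0).
# The resulting mask D has the same size as the original frame.
import numpy as np

def motion_candidate_region(md: np.ndarray) -> np.ndarray:
    return (md > 0).astype(np.uint8)   # 1 = motion point, 0 = background point
```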
In another preferred embodiment, the acquiring a m2 frame differential image according to the motion candidate region of the m1 frame image specifically includes:
for the l-th frame differential image of the m2 frame differential images, calculating the l-th frame differential image G_l according to the formula G_l = |D_{k+l} - D_l|; wherein D_{k+l} represents the motion candidate region of the (k+l)-th frame image, 0 < l ≤ m2, l < k+l ≤ m1.
Specifically, the method for acquiring each frame differential image is the same, and the acquisition of the l-th frame differential image is taken as an example for explanation here: in conjunction with the above embodiment, the motion candidate regions D of the m1 frame images have been obtained; for the l-th frame differential image, according to the formula G_l = |D_{k+l} - D_l|, the values of the pixel points at corresponding positions in the motion candidate region D_{k+l} of the (k+l)-th frame image and the motion candidate region D_l of the l-th frame image are subtracted and the absolute value is taken, so that the l-th frame differential image G_l is obtained.
It should be noted that k represents the number of interval frames, preferably 1 ≤ k ≤ 4. For example, if k is 2, the 3rd frame differential image is calculated according to the formula G_3 = |D_5 - D_3|, where D_5 represents the motion candidate region of the 5th frame image and D_3 represents the motion candidate region of the 3rd frame image.
Similarly, each frame differential image G is a binary image having the same size as the corresponding original image, and the value of a pixel point in the image G can only be 0 or 1. If the value of the pixel point at a certain position in the image G is 1, the pixel point at the corresponding position in the original image is a motion point (i.e., in a motion state); if the value is 0, the pixel point at the corresponding position in the original image is not a motion point but a background point (i.e., in a static state).
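For illustration only, the differential image G_l can be sketched as the absolute difference of two binary candidate masks, which for 0/1 masks is equivalent to their exclusive-or; the function name is an assumption of the sketch.

```python
# Sketch of G_l = |D_{k+l} - D_l| for two 0/1 candidate masks.
import numpy as np

def difference_image(d_l: np.ndarray, d_k_plus_l: np.ndarray) -> np.ndarray:
    return np.abs(d_k_plus_l.astype(np.int16) - d_l.astype(np.int16)).astype(np.uint8)

# Usage sketch with k = 2, as in the example above: G3 = difference_image(D3, D5)
```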
In another preferred embodiment, the correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring the initial motion region specifically includes:
correcting the pixel points in each frame of differential image according to the U component value and the V component value of each pixel point of each frame of differential image, and correspondingly obtaining m2 corrected motion candidate areas;
performing addition calculation on the m2 corrected motion candidate regions to obtain added motion candidate regions;
and carrying out binarization processing on the added motion candidate area to obtain the initial motion area.
Specifically, with reference to the foregoing embodiment, after obtaining m2 frame differential images, obtaining a U component value and a V component value of each pixel point of each frame differential image in the m2 frame differential images, respectively correcting a motion state of the pixel point in each corresponding frame differential image based on the U component value and the V component value of each pixel point, correspondingly obtaining m2 corrected motion candidate regions (each frame differential image corresponds to one motion candidate region), performing addition calculation on values of the pixel points at corresponding positions in the obtained m2 corrected motion candidate regions, correspondingly obtaining one added motion candidate region, and performing binarization processing on the value of each pixel point in the added motion candidate region, thereby obtaining an initial motion region.
Similarly, the initial motion region is also a binary image having the same size as the corresponding original image, and the value of a pixel point in the initial motion region can only be 0 or 1. If the value of the pixel point at a certain position in the initial motion region is 1, the pixel point at the corresponding position in the original image is a motion point (i.e., in a motion state); if the value is 0, the pixel point at the corresponding position in the original image is not a motion point but a background point (i.e., in a static state).
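For illustration only, the accumulation and binarization of the m2 corrected motion candidate regions can be sketched as follows; since the embodiment does not fix a specific binarization threshold, the majority-style vote threshold used here is an assumption.

```python
# Sketch of the accumulation and binarization step, assuming the corrected
# motion candidate regions are 0/1 uint8 arrays of equal size.
import numpy as np

def initial_motion_region(corrected_regions, vote_threshold=None):
    votes = np.sum(np.stack(corrected_regions, axis=0), axis=0)  # per-pixel count of frames marking motion
    if vote_threshold is None:
        vote_threshold = max(1, len(corrected_regions) // 2)     # assumed majority-style default
    return (votes >= vote_threshold).astype(np.uint8)
```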
As an improvement of the above scheme, the correcting the pixel point in each frame of the differential image according to the U component value and the V component value of each pixel point in each frame of the differential image specifically includes:
for any pixel point of any frame differential image, judging whether the mark of the pixel point is 1;
when the mark of the pixel point is 1, acquiring an a×a neighborhood of the pixel point; wherein the a×a neighborhood takes the pixel point as the central pixel point, and a > 0;
acquiring U component values and V component values of the pixel points marked as 0 contained in the a×a neighborhood;
calculating to obtain neighborhood U component and V component mean values of the pixel points according to the U component values and the V component values;
calculating Euclidean distance between the pixel point and the mean value;
and when the Euclidean distance is smaller than a preset distance threshold value, correcting the mark of the pixel point to be 0.
Specifically, the correction method for each pixel point of each frame differential image is the same, and the correction of any pixel point of any one frame differential image in the m2 frame differential images is taken as an example here. In conjunction with the above embodiment, the mark value of the pixel point is obtained and whether the mark of the pixel point is 1 is judged. When the mark of the pixel point is 1, the pixel point is taken as the central pixel point and an a×a neighborhood of the pixel point is taken around it; the U component values and V component values of the pixel points marked as 0 contained in the a×a neighborhood are obtained, and the neighborhood U component mean value and the neighborhood V component mean value of the pixel point (i.e., the chroma mean values of the neighborhood background points) are respectively calculated according to the obtained U component values and V component values of all the pixel points marked as 0. The Euclidean distance between the chroma values of the pixel point and the chroma mean values of the neighborhood background points is then calculated and compared with the preset distance threshold. When the calculated Euclidean distance is smaller than the preset distance threshold, the mark of the pixel point is corrected to 0; correspondingly, when the calculated Euclidean distance is not smaller than the preset distance threshold, the mark of the pixel point is not corrected and the mark value of the pixel point remains 1.
It should be noted that the mark value of each pixel point in the differential image can only be 0 or 1. Since this embodiment uses the chroma components to judge whether a pixel point in the motion state is an interference point caused by the motion of the dynamic background, if the mark of a pixel point is 0, the pixel point is a background point and no correction is performed; that is, only the pixel points with a mark value of 1 are corrected.
For example, for a pixel point x, if a is 3, a 3×3 neighborhood is taken with the pixel point x as the central pixel point, and the 3×3 neighborhood contains 3×3 = 9 pixel points (including the central pixel point). The pixel points marked as 0 in the 3×3 neighborhood are then counted; if 5 of the 9 pixel points have a mark value of 0, and the U component value and V component value corresponding to each of them are (U1, V1), (U2, V2), (U3, V3), (U4, V4) and (U5, V5) respectively, the neighborhood U component mean value of the pixel point x is calculated as μ_U = (U1 + U2 + U3 + U4 + U5)/5, and the neighborhood V component mean value is μ_V = (V1 + V2 + V3 + V4 + V5)/5.
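For illustration only, the chroma-based correction of one frame differential image can be sketched as follows; the neighborhood size a and the distance threshold are illustrative values, and taking the U and V channels from the corresponding original frame is an assumption of the sketch.

```python
# Sketch of the a x a neighborhood chroma correction for one frame, assuming
# `mask` is the 0/1 differential image and `u`, `v` are same-sized chroma channels.
import numpy as np

def chroma_correct(mask: np.ndarray, u: np.ndarray, v: np.ndarray,
                   a: int = 3, dist_threshold: float = 10.0) -> np.ndarray:
    h, w = mask.shape
    r = a // 2
    corrected = mask.copy()
    for y in range(h):
        for x in range(w):
            if mask[y, x] != 1:
                continue                          # only motion-marked points are corrected
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            bg = mask[y0:y1, x0:x1] == 0          # background points in the a x a neighborhood
            if not bg.any():
                continue
            mu_u = u[y0:y1, x0:x1][bg].mean()     # neighborhood U component mean value
            mu_v = v[y0:y1, x0:x1][bg].mean()     # neighborhood V component mean value
            dist = np.hypot(float(u[y, x]) - mu_u, float(v[y, x]) - mu_v)
            if dist < dist_threshold:             # chroma close to local background -> interference point
                corrected[y, x] = 0
    return corrected
```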
In another preferred embodiment, the performing area statistical correction on the initial motion area to obtain a motion target area specifically includes:
correcting the pixel points in the initial motion region according to the marking value of each pixel point in the initial motion region, and correspondingly obtaining a corrected initial motion region;
and obtaining the motion target area according to the corrected initial motion area.
Specifically, with reference to the foregoing embodiment, after the initial motion region is obtained according to the difference image, in order to further improve the accuracy of detecting the motion target, the region statistics correction may be performed on the pixel points in the initial motion region according to the obtained motion state of each pixel point in the initial motion region, so as to correspondingly obtain the corrected initial motion region, and thus obtain the motion target region according to the corrected initial motion region.
As an improvement of the above scheme, the correcting the pixel points in the initial motion region according to the label value of each pixel point in the initial motion region specifically includes:
for any pixel point in the initial motion region, acquiring a b×b neighborhood of the pixel point; wherein the b×b neighborhood takes the pixel point as the central pixel point, and b > 0;
counting the number n0 of the pixel points marked as 0 and the number n1 of the pixel points marked as 1 in the b×b neighborhood;
when b×b×β ≤ n0, the mark of the pixel point is corrected to 0;
when b×b×β > n0, the mark of the pixel point is corrected to 1.
Specifically, the correction method for each pixel point in the initial motion region is the same, and the correction of any pixel point in the initial motion region is taken as an example here. In conjunction with the above embodiment, the pixel point is taken as the central pixel point and a b×b neighborhood of the pixel point is taken around it; the number n0 of pixel points marked as 0 and the number n1 of pixel points marked as 1 contained in the b×b neighborhood are counted, the value of b×b×β is calculated according to the preset percentage β, and the calculated value of b×b×β is compared with the counted number n0 of pixel points marked as 0. When b×b×β is not more than n0, the mark of the pixel point is corrected to 0; when b×b×β > n0, the mark of the pixel point is corrected to 1.
For example, for a pixel point x, if b is 3 and β is 80%, a 3×3 neighborhood is taken with the pixel point x as the central pixel point, and the 3×3 neighborhood contains 3×3 = 9 pixel points (including the central pixel point). The number n0 of pixel points marked as 0 and the number n1 of pixel points marked as 1 in the 3×3 neighborhood are then counted; if 5 of the 9 pixel points have a mark value of 1 and the remaining 4 pixel points have a mark value of 0, then n0 is 4 and n1 is 5. In this case b×b×β = 3×3×80% = 7.2 > 4, so the mark value of the pixel point x is corrected to 1.
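For illustration only, the area statistical correction can be sketched as follows; the default values of b and β follow the example above and are illustrative choices.

```python
# Sketch of the b x b neighborhood area statistical correction:
# b*b*beta <= n0 -> mark 0, otherwise mark 1.
import numpy as np

def area_statistical_correct(region: np.ndarray, b: int = 3, beta: float = 0.8) -> np.ndarray:
    h, w = region.shape
    r = b // 2
    out = region.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            n0 = int((region[y0:y1, x0:x1] == 0).sum())   # background points in the neighborhood
            out[y, x] = 0 if b * b * beta <= n0 else 1
    return out

# With b = 3, beta = 0.8 and n0 = 4 as in the example above, 7.2 > 4 holds and the mark becomes 1.
```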
The embodiment of the present invention further provides a moving object detection apparatus, which can implement all the processes of the moving object detection method described in any of the above embodiments, and the functions and implemented technical effects of each module and unit in the apparatus are respectively the same as those of the moving object detection method described in the above embodiment, and are not described herein again.
Referring to fig. 2, it is a block diagram of a preferred embodiment of a moving object detecting apparatus according to the present invention, the apparatus includes:
an image sequence obtaining module 11, configured to obtain an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m > 1;
the pixel motion metric index obtaining module 12 is configured to calculate and obtain a pixel motion metric index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
A motion candidate region obtaining module 13, configured to obtain a motion candidate region of the m1 frame image according to the pixel motion metric index;
a difference image obtaining module 14, configured to obtain an m2 frame difference image according to the motion candidate region of the m1 frame image; wherein 0 < m2 < m1;
an initial motion region acquisition module 15, configured to correct the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquire an initial motion region;
and a moving target area obtaining module 16, configured to perform area statistical correction on the initial moving area to obtain a moving target area.
Preferably, the motion candidate region obtaining module 13 specifically includes:
the pixel state judging unit is used for judging the state of each pixel according to the pixel motion metric index of each pixel of the image for any frame image in the m1 frame images;
the pixel point marking unit is used for judging that the corresponding pixel point is a motion point and marking the pixel point as 1 when the value of the pixel motion measurement index is greater than 0, otherwise, judging that the corresponding pixel point is a background point and marking the pixel point as 0;
and the motion candidate region acquisition unit is used for acquiring the motion candidate region of the image according to the marked motion point and the marked background point.
Preferably, the difference image obtaining module 14 specifically includes:
a differential image obtaining unit, configured to calculate, for the l-th frame differential image of the m2 frame differential images, the l-th frame differential image G_l according to the formula G_l = |D_{k+l} - D_l|; wherein D_{k+l} represents the motion candidate region of the (k+l)-th frame image, 0 < l ≤ m2, l < k+l ≤ m1.
Preferably, the initial motion region acquiring module 15 specifically includes:
the motion candidate area correcting unit is used for correcting the pixel points in each frame of differential image according to the U component value and the V component value of each pixel point of each frame of differential image, and correspondingly obtaining m2 corrected motion candidate areas;
a motion candidate region addition unit configured to perform addition calculation on m2 corrected motion candidate regions to obtain an added motion candidate region;
an initial motion region acquisition unit configured to perform binarization processing on the motion candidate region after the addition to acquire the initial motion region.
Preferably, the motion candidate region correction unit is specifically configured to:
for any pixel point of any frame differential image, judging whether the mark of the pixel point is 1;
when the mark of the pixel point is 1, acquiring an a×a neighborhood of the pixel point; wherein the a×a neighborhood takes the pixel point as the central pixel point, and a > 0;
acquiring U component values and V component values of the pixel points marked as 0 contained in the a×a neighborhood;
calculating to obtain neighborhood U component and V component mean values of the pixel points according to the U component values and the V component values;
calculating Euclidean distance between the pixel point and the mean value;
and when the Euclidean distance is smaller than a preset distance threshold value, correcting the mark of the pixel point to be 0.
Preferably, the moving target area obtaining module 16 specifically includes:
the initial motion region correction unit is used for correcting the pixel points in the initial motion region according to the marking value of each pixel point in the initial motion region, and correspondingly obtaining the corrected initial motion region;
and the moving target area acquisition unit is used for acquiring the moving target area according to the corrected initial moving area.
Preferably, the initial motion region correction unit is specifically configured to:
for any pixel point in the initial motion region, acquiring a b×b neighborhood of the pixel point; wherein the b×b neighborhood takes the pixel point as the central pixel point, and b > 0;
counting the number n0 of the pixel points marked as 0 and the number n1 of the pixel points marked as 1 in the b×b neighborhood;
when b×b×β ≤ n0, the mark of the pixel point is corrected to 0;
when b×b×β > n0, the mark of the pixel point is corrected to 1.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program; wherein, when running, the computer program controls the device on which the computer-readable storage medium is located to execute the moving object detection method according to any of the above embodiments.
An embodiment of the present invention further provides a terminal device, as shown in fig. 3, which is a block diagram of a preferred embodiment of the terminal device provided in the present invention, the terminal device includes a processor 10, a memory 20, and a computer program stored in the memory 20 and configured to be executed by the processor 10, and the processor 10, when executing the computer program, implements the moving object detection method according to any of the embodiments.
Preferably, the computer program may be divided into one or more modules/units (e.g., computer program 1, computer program 2, … …) that are stored in the memory 20 and executed by the processor 10 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the terminal device.
The Processor 10 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.; the general purpose processor may be a microprocessor, or the Processor 10 may be any conventional processor. The Processor 10 is the control center of the terminal device and connects the various parts of the terminal device through various interfaces and lines.
The memory 20 mainly includes a program storage area that may store an operating system, an application program required for at least one function, and the like, and a data storage area that may store related data and the like. In addition, the memory 20 may be a high speed random access memory, may also be a non-volatile memory, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), and the like, or the memory 20 may also be other volatile solid state memory devices.
It should be noted that the terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the structural block diagram in fig. 3 is only an example of the terminal device and does not constitute a limitation on the terminal device, which may include more or fewer components than those shown, combine some components, or have different components.
To sum up, the moving object detection method, the moving object detection device, the computer-readable storage medium and the terminal device provided by the embodiments of the present invention have the following beneficial effects:
(1) the accuracy of moving target detection can be improved, and the false recognition rate is reduced;
(2) the calculation amount is small, the memory consumption is small, the real-time performance is good, and the real-time processing requirement can be met on the embedded equipment;
(3) the U and V channels and area statistics are used to correct the motion region, giving strong robustness against complex motion scenes (such as dynamic backgrounds of swinging leaves, water ripples on a lake surface and the like).
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A moving object detection method, comprising:
acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m > 1;
calculating and obtaining a pixel motion metric index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
Acquiring a motion candidate region of the m1 frame image according to the pixel motion metric index;
acquiring a m2 frame differential image according to the motion candidate area of the m1 frame image; wherein 0 < m2 < m1;
correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring an initial motion area;
and performing area statistical correction on the initial motion area to obtain a motion target area.
2. The method for detecting a moving object according to claim 1, wherein the obtaining the motion candidate region of the m1 frame image according to the pixel motion metric index specifically includes:
for any frame image in the m1 frame images, judging the state of each pixel point according to the pixel motion metric index of each pixel point of the image;
when the value of the pixel motion metric index is larger than 0, judging the corresponding pixel point as a motion point and marking the motion point as 1, otherwise, judging the corresponding pixel point as a background point and marking the background point as 0;
and obtaining a motion candidate region of the image according to the marked motion point and the marked background point.
3. The method according to claim 1, wherein the obtaining of the m2 frame differential image according to the motion candidate region of the m1 frame image specifically comprises:
for the l-th frame differential image of the m2 frame differential images, calculating the l-th frame differential image G_l according to the formula G_l = |D_{k+l} - D_l|; wherein D_{k+l} represents the motion candidate region of the (k+l)-th frame image, 0 < l ≤ m2, l < k+l ≤ m1.
4. The method according to claim 2, wherein the step of correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image and obtaining the initial motion region specifically comprises:
correcting the pixel points in each frame of differential image according to the U component value and the V component value of each pixel point of each frame of differential image, and correspondingly obtaining m2 corrected motion candidate areas;
performing addition calculation on the m2 corrected motion candidate regions to obtain added motion candidate regions;
and carrying out binarization processing on the added motion candidate area to obtain the initial motion area.
5. The method according to claim 4, wherein the correcting the pixel points in each frame of the differential image according to the U component value and the V component value of each pixel point in each frame of the differential image comprises:
for any pixel point of any frame differential image, judging whether the mark of the pixel point is 1;
when the mark of the pixel point is 1, acquiring an a×a neighborhood of the pixel point; wherein the a×a neighborhood takes the pixel point as the central pixel point, and a > 0;
acquiring U component values and V component values of the pixel points marked as 0 contained in the a×a neighborhood;
calculating to obtain neighborhood U component and V component mean values of the pixel points according to the U component values and the V component values;
calculating Euclidean distance between the pixel point and the mean value;
and when the Euclidean distance is smaller than a preset distance threshold value, correcting the mark of the pixel point to be 0.
6. The method for detecting a moving object according to claim 2, wherein the performing area statistical correction on the initial moving area to obtain the moving object area specifically comprises:
correcting the pixel points in the initial motion region according to the marking value of each pixel point in the initial motion region, and correspondingly obtaining a corrected initial motion region;
and obtaining the motion target area according to the corrected initial motion area.
7. The method according to claim 6, wherein the correcting the pixel points in the initial motion region according to the label value of each pixel point in the initial motion region specifically comprises:
for any pixel point in the initial motion region, acquiring a b×b neighborhood of the pixel point; wherein the b×b neighborhood takes the pixel point as the central pixel point, and b > 0;
counting the number n0 of the pixel points marked as 0 and the number n1 of the pixel points marked as 1 in the b×b neighborhood;
when b×b×β ≤ n0, the mark of the pixel point is corrected to 0;
when b×b×β > n0, the mark of the pixel point is corrected to 1.
8. A moving object detecting apparatus, comprising:
the image sequence acquisition module is used for acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m > 1;
the pixel motion metric index acquisition module is used for calculating and obtaining the pixel motion metric index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
A motion candidate region obtaining module, configured to obtain a motion candidate region of the m1 frame image according to the pixel motion metric indicator;
a difference image obtaining module, configured to obtain a difference image of m2 frames according to the motion candidate region of the m1 frame image; wherein 0 < m2 < m1;
the initial motion area acquisition module is used for correcting the m2 frame differential image according to the neighborhood U component value and the neighborhood V component value of each pixel point of each frame differential image and acquiring an initial motion area;
and the moving target area acquisition module is used for carrying out area statistical correction on the initial moving area to acquire a moving target area.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored computer program; wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the moving object detection method according to any one of claims 1 to 7.
10. A terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the moving object detection method according to any one of claims 1 to 7 when executing the computer program.
CN201911138198.9A 2019-11-19 2019-11-19 Moving target detection method and device, storage medium and terminal equipment Active CN110910429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911138198.9A CN110910429B (en) 2019-11-19 2019-11-19 Moving target detection method and device, storage medium and terminal equipment

Publications (2)

Publication Number Publication Date
CN110910429A (en) 2020-03-24
CN110910429B (en) 2023-03-17

Family

ID=69818164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911138198.9A Active CN110910429B (en) 2019-11-19 2019-11-19 Moving target detection method and device, storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN110910429B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1822086A (en) * 2005-02-16 2006-08-23 日本电气株式会社 Image processing method, display device and its driving method
US20090245575A1 (en) * 2008-03-25 2009-10-01 Fujifilm Corporation Method, apparatus, and program storage medium for detecting object
JP2009251892A (en) * 2008-04-04 2009-10-29 Fujifilm Corp Object detection method, object detection device, and object detection program
CN102184553A (en) * 2011-05-24 2011-09-14 杭州华三通信技术有限公司 Moving shadow detecting method and device
CN102509311A (en) * 2011-11-21 2012-06-20 华亚微电子(上海)有限公司 Motion detection method and device
CN102946504A (en) * 2012-11-22 2013-02-27 四川虹微技术有限公司 Self-adaptive moving detection method based on edge detection
CN103164847A (en) * 2013-04-03 2013-06-19 上海理工大学 Method for eliminating shadow of moving target in video image
CN109102523A (en) * 2018-07-13 2018-12-28 南京理工大学 A kind of moving object detection and tracking

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZEYAD QASIM HABEEB AL-ZAYDI: "VIDEO AND IMAGE PROCESSING BASED TECHNIQUES FOR PEOPLE DETECTION AND COUNTING IN CROWDED ENVIRONMENTS", 《RESEARCH PORTAL》 *
彭爽 ET AL.: "Real-time Motion Detection Algorithm for High-definition Video Surveillance Systems", 《COMPUTER ENGINEERING》 *
范传阳 ET AL.: "Design of a Moving Target Image Detection System Based on SOPC", 《SIGNALS AND SYSTEMS》 *
闫飞: "Design of a Moving Target Detection System Based on SOPC", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111935454A (en) * 2020-07-27 2020-11-13 衡阳市大井医疗器械科技有限公司 Traffic-saving image stream transmission method and electronic equipment
CN111935454B (en) * 2020-07-27 2022-08-26 衡阳市大井医疗器械科技有限公司 Traffic-saving image stream transmission method and electronic equipment
CN112164058A (en) * 2020-10-13 2021-01-01 东莞市瑞图新智科技有限公司 Silk-screen area coarse positioning method and device for optical filter and storage medium

Also Published As

Publication number Publication date
CN110910429B (en) 2023-03-17

Similar Documents

Publication Publication Date Title
CN110149482B (en) Focusing method, focusing device, electronic equipment and computer readable storage medium
CN110334635B (en) Subject tracking method, apparatus, electronic device and computer-readable storage medium
US8600105B2 (en) Combining multiple cues in a visual object detection system
CN111179302B (en) Moving target detection method and device, storage medium and terminal equipment
US8306262B2 (en) Face tracking method for electronic camera device
JP6688277B2 (en) Program, learning processing method, learning model, data structure, learning device, and object recognition device
CN111325769B (en) Target object detection method and device
US11538175B2 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
CN111325716A (en) Screen scratch fragmentation detection method and equipment
US20130278788A1 (en) Method for determining the extent of a foreground object in an image
CN106327488B (en) Self-adaptive foreground detection method and detection device thereof
CN111723644A (en) Method and system for detecting occlusion of surveillance video
CN111368587B (en) Scene detection method, device, terminal equipment and computer readable storage medium
CN110599516A (en) Moving target detection method and device, storage medium and terminal equipment
US11107237B2 (en) Image foreground detection apparatus and method and electronic device
CN110910429B (en) Moving target detection method and device, storage medium and terminal equipment
US20220270266A1 (en) Foreground image acquisition method, foreground image acquisition apparatus, and electronic device
JP2020024675A (en) Method, device and system for determining whether pixel position of image frame belongs to background or foreground
CN113313626A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113658197B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN112581481B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN111160340B (en) Moving object detection method and device, storage medium and terminal equipment
CN113409353A (en) Motion foreground detection method and device, terminal equipment and storage medium
CN111539975B (en) Method, device, equipment and storage medium for detecting moving object
CN109255797B (en) Image processing device and method, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20220822
Address after: Floor 12-17, unit 1, building 2, No. 466, Xinyu Road, high tech Zone, Chengdu, Sichuan 610000
Applicant after: Chengdu Lianzhou International Technology Co.,Ltd.
Address before: 518000 the 1st and 3rd floors of the south section of building 24 and the 1st-4th floor of the north section of building 28, Shennan Road Science and Technology Park, Nanshan District, Shenzhen City, Guangdong Province
Applicant before: TP-LINK TECHNOLOGIES Co.,Ltd.
GR01 Patent grant