CN110910429B - Moving target detection method and device, storage medium and terminal equipment - Google Patents
Moving target detection method and device, storage medium and terminal equipment
- Publication number
- CN110910429B (application number CN201911138198.9A)
- Authority
- CN
- China
- Prior art keywords
- motion
- pixel point
- pixel
- frame
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a moving target detection method, a moving target detection device, a storage medium and terminal equipment, wherein the method comprises the following steps: acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images; calculating and obtaining a pixel motion measurement index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; obtaining a motion candidate area of the m1 frame image according to the pixel motion metric index; acquiring an m2 frame differential image according to the motion candidate area of the m1 frame image; correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring an initial motion area; and performing area statistical correction on the initial motion area to obtain a motion target area. By adopting the technical scheme of the invention, the accuracy of moving target detection can be improved, the false recognition rate is reduced, and the real-time performance is better.
Description
Technical Field
The present invention relates to the field of moving object detection technologies, and in particular, to a moving object detection method and apparatus, a computer-readable storage medium, and a terminal device.
Background
Moving target detection is to segment a moving area in an image sequence from a relatively static background to obtain a moving foreground target, so that the moving target can be further processed at higher levels such as tracking, classification and recognition.
At present, the mainstream moving object detection methods mainly include the optical flow method and the background difference method. The optical flow method generally detects a moving target by estimating the pixel velocity field of an image sequence from the gray-scale changes and correlation of adjacent pixels at different times; the background difference method constructs a background model to replace the real background scene and compares the image sequence with the background model to identify the difference between a moving target and the background, thereby realizing the detection of the moving target, where typical background models include the Gaussian mixture model, ViBe, and the like.
However, the optical flow method is susceptible to noise, and has poor anti-noise performance, and the background difference method is sensitive to ambient light changes, and is easily interfered by dynamic changes of background scenes (such as leaf swing, lake surface water ripple, weather change), illumination change and cluttered background in the detection process, so that the accuracy of detecting a moving target is low, and the dynamic background is likely to be recognized as the moving target by mistake.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a moving object detection method, apparatus, computer-readable storage medium and terminal device, which can improve the accuracy of moving object detection, reduce the false identification rate, and have better real-time performance.
In order to solve the above technical problem, an embodiment of the present invention provides a moving object detection method, including:
acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m >1;
calculating and obtaining a pixel motion measurement index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
Obtaining a motion candidate area of the m1 frame image according to the pixel motion metric index;
acquiring an m2 frame differential image according to the motion candidate area of the m1 frame image; wherein 0 < m2 < m1;
correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring an initial motion area;
and performing area statistical correction on the initial motion area to obtain a motion target area.
Further, the obtaining of the motion candidate region of the m1 frame image according to the pixel motion metric index specifically includes:
for any frame image in m1 frame images, judging the state of each pixel point according to the pixel motion measurement index of each pixel point of the image;
when the value of the pixel motion metric index is larger than 0, judging the corresponding pixel point as a motion point and marking the motion point as 1, otherwise, judging the corresponding pixel point as a background point and marking the background point as 0;
and obtaining a motion candidate region of the image according to the marked motion point and the marked background point.
Further, the acquiring an m2 frame differential image according to the motion candidate region of the m1 frame image specifically includes:
for the l-th frame differential image in the m2 frame differential images, calculating the l-th frame differential image G_l according to the formula G_l = |D_{k+l} - D_l|; wherein D_{k+l} represents the motion candidate region of the (k+l)-th frame image, 0 < l ≤ m2, l < k+l ≤ m1.
Further, the correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring the initial motion region specifically includes:
correcting the pixel points in each frame of differential image according to the U component value and the V component value of each pixel point of each frame of differential image, and correspondingly obtaining m2 corrected motion candidate areas;
performing addition calculation on the m2 corrected motion candidate areas to obtain an added motion candidate area;
and carrying out binarization processing on the added motion candidate area to obtain the initial motion area.
Further, the correcting the pixel point in each frame of the differential image according to the U component value and the V component value of each pixel point in each frame of the differential image specifically includes:
for any pixel point of any frame differential image, judging whether the mark of the pixel point is 1;
when the mark of the pixel point is 1, acquiring an a×a neighborhood of the pixel point; wherein the a×a neighborhood takes the pixel point as a central pixel point, and a > 0;
acquiring U component values and V component values of pixel points marked as 0 contained in the a×a neighborhood;
calculating to obtain neighborhood U component and V component mean values of the pixel points according to the U component values and the V component values;
calculating Euclidean distance between the pixel point and the mean value;
and when the Euclidean distance is smaller than a preset distance threshold value, correcting the mark of the pixel point to be 0.
Further, performing area statistical correction on the initial motion area to obtain a motion target area specifically includes:
correcting the pixel points in the initial motion region according to the marking value of each pixel point in the initial motion region, and correspondingly obtaining a corrected initial motion region;
and obtaining the motion target area according to the corrected initial motion area.
Further, the correcting the pixel points in the initial motion region according to the mark value of each pixel point in the initial motion region specifically includes:
for any pixel point in the initial motion region, acquiring a b×b neighborhood of the pixel point; wherein the b×b neighborhood takes the pixel point as a central pixel point, and b > 0;
counting the number n0 of pixel points marked as 0 and the number n1 of pixel points marked as 1 in the b×b neighborhood;
when b×b×β is not more than n0, the mark of the pixel point is corrected to 0;
when b×b×β > n0, correcting the mark of the pixel point to 1; wherein β is a preset percentage.
In order to solve the above technical problem, an embodiment of the present invention further provides a moving object detecting device, including:
the image sequence acquisition module is used for acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m >1;
the pixel motion metric index acquisition module is used for calculating and obtaining the pixel motion metric index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
A motion candidate region obtaining module, configured to obtain a motion candidate region of the m1 frame image according to the pixel motion metric indicator;
a difference image obtaining module, configured to obtain m2 frame difference images according to the motion candidate area of the m1 frame image; wherein 0 < m2 < m1;
the initial motion area acquisition module is used for correcting the m2 frame differential image according to the neighborhood U component value and the neighborhood V component value of each pixel point of each frame of differential image and acquiring an initial motion area;
and the moving target area acquisition module is used for carrying out area statistical correction on the initial moving area to acquire a moving target area.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program; wherein the computer program, when running, controls the device on which the computer-readable storage medium is located to perform any one of the above-mentioned moving object detection methods.
An embodiment of the present invention further provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor implements any one of the above-described moving object detection methods when executing the computer program.
Compared with the prior art, the embodiments of the present invention provide a moving object detection method and apparatus, a computer-readable storage medium and a terminal device. A pixel motion metric index of each pixel point of the m1 frame images is calculated according to the Y component value of each pixel point of each frame image in an image sequence to be processed; a motion candidate area of the m1 frame images is obtained according to the pixel motion metric indexes of the pixel points; an m2 frame differential image is obtained through differential calculation on the motion candidate areas of the m1 frame images; the m2 frame differential images are corrected according to the U component value and the V component value of each pixel point of each frame differential image, and an initial motion area is obtained; and a moving target area is obtained through area statistical correction of the initial motion area. Therefore, the accuracy of moving target detection can be improved, the false identification rate is reduced, and the real-time performance is better.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of a method for detecting a moving object according to the present invention;
fig. 2 is a block diagram of a preferred embodiment of a moving object detecting apparatus according to the present invention;
fig. 3 is a block diagram of a preferred embodiment of a terminal device provided in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a moving object detection method, which is a flowchart of a preferred embodiment of the moving object detection method provided by the present invention, as shown in fig. 1, and the method includes steps S11 to S16:
s11, acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m >1;
s12, calculating to obtain a pixel motion measurement index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
S13, acquiring a motion candidate area of the m1 frame image according to the pixel motion metric index;
s14, acquiring an m2 frame differential image according to the motion candidate area of the m1 frame image; wherein 0 < m2 < m1;
s15, correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring an initial motion area;
and S16, performing area statistical correction on the initial motion area to obtain a motion target area.
Specifically, the image sequence to be processed (including m frames of images) may be obtained in real time by an electronic device, for example, the image sequence to be processed is obtained in real time by a camera of an electronic device with a video recording function, such as a web camera, a mobile phone, a tablet computer, and the like, which is not limited in the present invention.
After obtaining the image sequence to be processed, in order to obtain the luminance component value of each frame of image, the RGB color space of each of the m frames of images is converted into the YUV color space, and the Y component value of each pixel point of each frame of image in the m frames of images is correspondingly acquired; the pixel motion metric index MD of each pixel point of the m1 frame images is then calculated based on the Y component value of each pixel point. The pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, where Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image and k represents the number of interval frames, preferably 1 ≤ k ≤ 4. For example, taking k = 2, the pixel motion metric index of the 1st pixel point of the 1st frame image is MD_{1,1} = |Y_{3,1} - Y_{1,1}| - (1-α)*Y_{3,1}, where Y_{3,1} represents the Y component value of the 1st pixel point of the 3rd frame image and Y_{1,1} represents the Y component value of the 1st pixel point of the 1st frame image.
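For illustration only, the following is a minimal NumPy sketch of this step, assuming the Y (luma) channels of the m frames are already available as 2-D arrays; the function name and the default values of k and alpha are illustrative assumptions, not taken from the patent text.

```python
import numpy as np

def pixel_motion_metric(y_frames, k=2, alpha=0.5):
    """Compute the pixel motion metric MD for the first m1 = m - k frames.

    y_frames : list of 2-D arrays, the Y (luma) channel of each frame.
    Returns a list of m1 float arrays where, element-wise,
        MD[i] = |Y[i+k] - Y[i]| - (1 - alpha) * Y[i+k]
    (0-based indices here; the description uses 1-based frame indices).
    """
    m = len(y_frames)
    m1 = m - k                       # largest number of frames for which MD can be formed
    md = []
    for i in range(m1):
        y_i = y_frames[i].astype(np.float32)
        y_ki = y_frames[i + k].astype(np.float32)
        md.append(np.abs(y_ki - y_i) - (1.0 - alpha) * y_ki)
    return md
```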
According to the calculated value of the pixel motion metric index MD of each pixel point of each frame image in the m1 frame images, the motion state of each corresponding pixel point can be judged, and a motion candidate area of the m1 frame images can be correspondingly obtained (each frame image corresponds to one motion candidate area). Difference calculation is then carried out according to the obtained motion candidate areas of the m1 frame images, and an m2 frame differential image can be correspondingly obtained. Next, the chromaticity component values of each pixel point of each frame differential image in the m2 frame differential images, namely the U component value and the V component value, are obtained; the motion state of the pixel points in the m2 frame differential images is corrected based on the U component value and the V component value of each pixel point, the m2 corrected differential images are correspondingly obtained, and an initial motion area is obtained according to the obtained m2 corrected differential images. Furthermore, area statistical correction is carried out on the motion state of each pixel point in the obtained initial motion area, and a motion target area is obtained according to the corrected initial motion area.
The moving object detection method provided by the embodiment of the invention defines the pixel motion metric index MD of the pixel points according to the brightness component values of the pixel points, judges the motion state of each pixel point according to the value of the MD, correspondingly obtains the motion candidate area, performs differential calculation based on the motion candidate area, correspondingly obtains the differential image, corrects the motion state of the pixel points of the differential image according to the chrominance component values of the pixel points, correspondingly obtains the initial motion area, and corrects the motion state of the pixel points in the initial motion area, thereby obtaining the moving object area.
In another preferred embodiment, the obtaining the motion candidate region of the m1 frame image according to the pixel motion metric index specifically includes:
for any frame image in m1 frame images, judging the state of each pixel point according to the pixel motion measurement index of each pixel point of the image;
when the value of the pixel motion metric index is larger than 0, judging the corresponding pixel point as a motion point and marking the motion point as 1, otherwise, judging the corresponding pixel point as a background point and marking the background point as 0;
and obtaining a motion candidate region of the image according to the marked motion point and the marked background point.
Specifically, the method for acquiring the motion candidate region of each frame image is the same; here, the motion candidate region of any one frame image in the m1 frame images is taken as an example. In combination with the above embodiment, the motion state of each corresponding pixel point can be judged according to the calculated value of the pixel motion metric index MD of each pixel point in the image, namely whether the pixel point is a motion point is judged. When the value of the pixel motion metric index MD is greater than 0, the corresponding pixel point is judged to be a motion point and is marked as 1; when the value of the pixel motion metric index MD is not greater than 0, the corresponding pixel point is judged not to be a motion point but a background point, and is marked as 0. After the motion states of all the pixel points in the image are judged and marked, the motion candidate region of the image can be correspondingly obtained (the motion points in the motion candidate region are marked as 1, and the background points are marked as 0).
It should be noted that each motion candidate region D is a binary image D with the same size as the corresponding original image, and the value of a pixel point in the image D can only be 0 or 1. If the value of the pixel point at a certain position in the image D is 1, the pixel point at the corresponding position in the original image is a motion point (i.e., in a motion state); if the value of the pixel point at a certain position in the image D is 0, the pixel point at the corresponding position in the original image is not a motion point but a background point (i.e., in a static state).
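A corresponding sketch of the thresholding described above, continuing the hypothetical pixel_motion_metric helper; the 0/1 convention follows the description (1 = motion point, 0 = background point).

```python
def motion_candidate_regions(md_list):
    """Turn each MD map into a binary motion candidate region D.

    A pixel is marked 1 (motion point) where MD > 0, otherwise 0 (background point).
    """
    return [(md > 0).astype(np.uint8) for md in md_list]
```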
In another preferred embodiment, the acquiring the m2 frame differential image according to the motion candidate region of the m1 frame image specifically includes:
for the l-th frame differential image in the m2 frame differential images, calculating the l-th frame differential image G_l according to the formula G_l = |D_{k+l} - D_l|; wherein D_{k+l} represents the motion candidate region of the (k+l)-th frame image, 0 < l ≤ m2, l < k+l ≤ m1.
Specifically, the method for acquiring each frame of differential image is the same; here, the acquisition of the l-th frame differential image is taken as an example. In conjunction with the above embodiment, the motion candidate regions D of the m1 frame images have been obtained. For the l-th frame differential image, according to the formula G_l = |D_{k+l} - D_l|, the values of the pixel points at corresponding positions in the motion candidate region D_{k+l} of the (k+l)-th frame image and the motion candidate region D_l of the l-th frame image are subtracted and the absolute value is taken, so that the l-th frame differential image G_l is obtained.
It should be noted that k represents the number of interval frames, preferably 1 ≤ k ≤ 4. For example, taking k = 2, the 3rd frame differential image is calculated according to the formula G_3 = |D_5 - D_3|, where D_5 represents the motion candidate region of the 5th frame image and D_3 represents the motion candidate region of the 3rd frame image.
Similarly, each frame of differential image G is a binary image G with the same size as the corresponding original image, and the value of a pixel point in the image G can only be 0 or 1. If the value of the pixel point at a certain position in the image G is 1, the pixel point at the corresponding position in the original image is a motion point (i.e., in a motion state); if the value of the pixel point at a certain position in the image G is 0, the pixel point at the corresponding position in the original image is not a motion point but a background point (i.e., in a static state).
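Under the same assumptions, a sketch of this differencing step on the binary candidate masks; with an interval of k frames, at most m2 = m1 - k differential images can be formed, consistent with the constraint l < k+l ≤ m1.

```python
def differential_images(candidate_regions, k=2):
    """G_l = |D_{l+k} - D_l|, computed on the binary candidate masks."""
    m1 = len(candidate_regions)
    m2 = m1 - k                      # number of differential images that can be formed
    return [np.abs(candidate_regions[l + k].astype(np.int16)
                   - candidate_regions[l].astype(np.int16)).astype(np.uint8)
            for l in range(m2)]
```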
In another preferred embodiment, the correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring the initial motion region specifically includes:
correcting the pixel points in each frame of differential image according to the U component value and the V component value of each pixel point of each frame of differential image, and correspondingly obtaining m2 corrected motion candidate areas;
performing addition calculation on the m2 corrected motion candidate areas to obtain an added motion candidate area;
and carrying out binarization processing on the added motion candidate area to obtain the initial motion area.
Specifically, with reference to the foregoing embodiment, after the m2 frame differential images are obtained, the U component value and the V component value of each pixel point of each frame differential image in the m2 frame differential images are obtained, and the motion state of the pixel points in each corresponding frame differential image is corrected based on the U component value and the V component value of each pixel point, so that m2 corrected motion candidate regions are correspondingly obtained (each frame differential image corresponds to one motion candidate region). The values of the pixel points at corresponding positions in the obtained m2 corrected motion candidate regions are then added, and one added motion candidate region is correspondingly obtained. Binarization processing is performed on the value of each pixel point in the added motion candidate region, thereby obtaining the initial motion region.
Similarly, the initial motion region is also a binary image with the same size as the corresponding original image, and the value of a pixel point in the initial motion region can only be 0 or 1. If the value of the pixel point at a certain position in the initial motion region is 1, the pixel point at the corresponding position in the original image is a motion point (i.e., in a motion state); if the value of the pixel point at a certain position in the initial motion region is 0, the pixel point at the corresponding position in the original image is not a motion point but a background point (i.e., in a static state).
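A sketch of the accumulation and binarization, assuming the m2 corrected motion candidate regions have already been produced (for example by the chrominance correction detailed below); the vote threshold min_votes is an assumed parameter, since the description does not fix the binarization rule beyond "binarization processing".

```python
def initial_motion_region(corrected_masks, min_votes=1):
    """Sum the m2 corrected masks and binarize the result.

    A pixel is kept as a motion point (1) if it was marked in at least
    `min_votes` of the corrected differential images, otherwise it becomes 0.
    """
    votes = np.sum([m.astype(np.int32) for m in corrected_masks], axis=0)
    return (votes >= min_votes).astype(np.uint8)
```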
As an improvement of the above scheme, the correcting the pixel point in each frame of the differential image according to the U component value and the V component value of each pixel point in each frame of the differential image specifically includes:
for any pixel point of any frame differential image, judging whether the mark of the pixel point is 1;
when the mark of the pixel point is 1, acquiring an a×a neighborhood of the pixel point; wherein the a×a neighborhood takes the pixel point as a central pixel point, and a > 0;
acquiring U component values and V component values of pixel points marked as 0 contained in the a×a neighborhood;
calculating to obtain neighborhood U component and V component mean values of the pixel points according to the U component values and the V component values;
calculating Euclidean distance between the pixel point and the mean value;
and when the Euclidean distance is smaller than a preset distance threshold value, correcting the mark of the pixel point to be 0.
Specifically, the correction method for each pixel point of each frame differential image is the same; here, the correction of any pixel point of any one frame differential image in the m2 frame differential images is taken as an example. In combination with the above embodiment, the mark value of the pixel point is obtained, and whether the mark of the pixel point is 1 is judged. When the mark of the pixel point is 1, the pixel point is taken as a central pixel point and an a×a neighborhood of the pixel point is taken around it; the U component values and V component values of the pixel points marked as 0 in the neighborhood are obtained, and the neighborhood U component mean value and neighborhood V component mean value of the pixel point (namely the chromaticity mean value of the neighborhood background points) are respectively calculated from the obtained U component values and V component values of all the pixel points marked as 0. The Euclidean distance between the chromaticity of the pixel point and the chromaticity mean value of the neighborhood background points is then calculated and compared with a preset distance threshold. When the calculated Euclidean distance is smaller than the preset distance threshold, the mark of the pixel point is corrected to 0; correspondingly, when the calculated Euclidean distance is not smaller than the preset distance threshold, the mark of the pixel point is not corrected and its mark value remains 1.
It should be noted that, for each pixel point in the differential image, the mark value may be 0 or 1. Since this embodiment uses the chrominance components to determine whether a pixel point in the motion state is an interference point caused by the motion of the dynamic background, a pixel point whose mark is 0 is a background point and is not corrected; that is, only pixel points with a mark value of 1 are corrected.
for example, for a pixel point x, a =3 is taken, a 3*3 neighborhood is taken with the pixel point x as a center pixel point, 3 × 3=9 pixel points (including the center pixel point) are included in 3*3 neighborhood, then, a pixel point marked as 0 in 3*3 neighborhood is counted, if a mark value of 5 pixel points in total among the 9 pixel points is 0, a U component value and a V component value corresponding to each pixel point are respectively (U1, V1), (U2, V2), (U3, V3), (U4, V4) and (U5, V5), a neighborhood U component mean value of the pixel point x is calculated to be mu = (U1 + U2+ U3+ U4+ U5)/5, and a domain V component mean value is mv = (V1 + V2+ V3+ V4+ V5)/5.
In another preferred embodiment, the performing area statistical correction on the initial motion area to obtain a motion target area specifically includes:
correcting the pixel points in the initial motion region according to the marking value of each pixel point in the initial motion region, and correspondingly obtaining a corrected initial motion region;
and obtaining the motion target area according to the corrected initial motion area.
Specifically, with reference to the foregoing embodiment, after the initial motion region is obtained according to the difference image, in order to further improve the accuracy of detecting the moving target, the region statistical correction may be performed on the pixel points in the initial motion region according to the motion state of each pixel point in the obtained initial motion region, and the corrected initial motion region is correspondingly obtained, so that the moving target region is obtained according to the corrected initial motion region.
As an improvement of the above scheme, the correcting the pixel points in the initial motion region according to the label value of each pixel point in the initial motion region specifically includes:
for any pixel point in the initial motion region, acquiring a b×b neighborhood of the pixel point; wherein the b×b neighborhood takes the pixel point as a central pixel point, and b > 0;
counting the number n0 of pixel points marked as 0 and the number n1 of pixel points marked as 1 in the b×b neighborhood;
when b×b×β is not more than n0, the mark of the pixel point is corrected to 0;
when b×b×β > n0, correcting the mark of the pixel point to 1; wherein β is a preset percentage.
Specifically, the correction method for each pixel point in the initial motion region is the same; here, the correction of any one pixel point in the initial motion region is taken as an example. In combination with the above embodiment, the pixel point is taken as a central pixel point and a b×b neighborhood of the pixel point is taken around it. The number n0 of pixel points marked as 0 and the number n1 of pixel points marked as 1 contained in the b×b neighborhood are counted, the value of b×b×β is calculated according to a preset percentage β, and the calculated value of b×b×β is compared with the counted number n0 of pixel points marked as 0 in the b×b neighborhood. When b×b×β is not more than n0, the mark of the pixel point is corrected to 0; when b×b×β > n0, the mark of the pixel point is corrected to 1.
For example, for a pixel point x, take b = 3 and β = 80%. A 3×3 neighborhood is taken with the pixel point x as the center pixel point, and 3 × 3 = 9 pixel points (including the center pixel point) are included in the 3×3 neighborhood. The number n0 of pixel points marked as 0 and the number n1 of pixel points marked as 1 in the 3×3 neighborhood are then counted. If 5 of the 9 pixel points have a mark value of 1 and the remaining 4 pixel points have a mark value of 0, then n0 = 4 and n1 = 5; at this time, b×b×β = 3 × 3 × 80% = 7.2 > 4, so the mark value of the pixel point x is corrected to 1.
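A sketch of the area statistical correction under the same assumptions; b and beta correspond to the neighborhood size and the preset percentage, with defaults matching the 3×3, β = 80% example above.

```python
def area_statistical_correction(region, b=3, beta=0.8):
    """Re-label each pixel of the initial motion region by voting over its b x b neighborhood.

    If the number of background neighbors n0 is at least b*b*beta the mark
    becomes 0, otherwise it becomes 1.
    """
    h, w = region.shape
    r = b // 2
    out = np.empty_like(region)
    thresh = b * b * beta
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            window = region[y0:y1, x0:x1]       # includes the center pixel itself
            n0 = int((window == 0).sum())       # background points in the neighborhood
            out[y, x] = 0 if thresh <= n0 else 1
    return out
```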
The embodiment of the present invention further provides a moving object detection apparatus, which can implement all the processes of the moving object detection method described in any of the above embodiments, and the functions and implemented technical effects of each module and unit in the apparatus are respectively the same as those of the moving object detection method described in the above embodiment, and are not described herein again.
Referring to fig. 2, it is a block diagram of a preferred embodiment of a moving object detecting apparatus according to the present invention, the apparatus includes:
an image sequence obtaining module 11, configured to obtain an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m >1;
the pixel motion metric index acquisition module 12 is configured to calculate a pixel motion metric index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
A motion candidate region obtaining module 13, configured to obtain a motion candidate region of the m1 frame image according to the pixel motion metric index;
a difference image obtaining module 14, configured to obtain an m2 frame difference image according to the motion candidate region of the m1 frame image; wherein 0 < m2 < m1;
an initial motion region acquisition module 15, configured to correct the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquire an initial motion region;
and a moving target area obtaining module 16, configured to perform area statistical correction on the initial moving area to obtain a moving target area.
Preferably, the motion candidate region obtaining module 13 specifically includes:
the pixel state judging unit is used for judging the state of each pixel according to the pixel motion metric index of each pixel of the image for any frame of image in the m1 frame of image;
the pixel point marking unit is used for judging that the corresponding pixel point is a motion point and marking the pixel point as 1 when the value of the pixel motion measurement index is greater than 0, otherwise, judging that the corresponding pixel point is a background point and marking the pixel point as 0;
and the motion candidate region acquisition unit is used for acquiring the motion candidate region of the image according to the marked motion point and the marked background point.
Preferably, the difference image obtaining module 14 specifically includes:
a differential image obtaining unit, configured to calculate, for the l-th frame differential image in the m2 frame differential images, the l-th frame differential image G_l according to the formula G_l = |D_{k+l} - D_l|; wherein D_{k+l} represents the motion candidate region of the (k+l)-th frame image, 0 < l ≤ m2, l < k+l ≤ m1.
Preferably, the initial motion region acquiring module 15 specifically includes:
the motion candidate area correction unit is used for correcting the pixel points in each frame of differential image according to the U component value and the V component value of each pixel point of each frame of differential image, and correspondingly obtaining m2 corrected motion candidate areas;
a motion candidate region adding unit configured to perform addition calculation on m2 corrected motion candidate regions to obtain an added motion candidate region;
an initial motion region acquisition unit configured to perform binarization processing on the motion candidate region after the addition to acquire the initial motion region.
Preferably, the motion candidate region correction unit is specifically configured to:
for any pixel point of any frame differential image, judging whether the mark of the pixel point is 1;
when the mark of the pixel point is 1, acquiring an a×a neighborhood of the pixel point; wherein the a×a neighborhood takes the pixel point as a central pixel point, and a > 0;
acquiring U component values and V component values of pixel points marked as 0 contained in the a×a neighborhood;
calculating to obtain neighborhood U component and V component mean values of the pixel points according to the U component values and the V component values;
calculating Euclidean distance between the pixel point and the mean value;
and when the Euclidean distance is smaller than a preset distance threshold value, correcting the mark of the pixel point to be 0.
Preferably, the moving target area obtaining module 16 specifically includes:
the initial motion region correction unit is used for correcting the pixel points in the initial motion region according to the marking value of each pixel point in the initial motion region, and correspondingly obtaining the corrected initial motion region;
and the moving target area acquisition unit is used for acquiring the moving target area according to the corrected initial moving area.
Preferably, the initial motion region correction unit is specifically configured to:
for any pixel point in the initial motion region, acquiring a b×b neighborhood of the pixel point; wherein the b×b neighborhood takes the pixel point as a central pixel point, and b > 0;
counting the number n0 of pixel points marked as 0 and the number n1 of pixel points marked as 1 in the b×b neighborhood;
when b×b×β is not more than n0, the mark of the pixel point is corrected to 0;
when b×b×β > n0, correcting the mark of the pixel point to 1; wherein β is a preset percentage.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program; wherein, when running, the computer program controls the device on which the computer readable storage medium is located to execute the moving object detection method according to any one of the above embodiments.
An embodiment of the present invention further provides a terminal device, as shown in fig. 3, which is a block diagram of a preferred embodiment of the terminal device provided in the present invention, the terminal device includes a processor 10, a memory 20, and a computer program stored in the memory 20 and configured to be executed by the processor 10, and the processor 10, when executing the computer program, implements the moving object detection method according to any of the embodiments.
Preferably, the computer program can be divided into one or more modules/units (e.g. computer program 1, computer program 2, ...) which are stored in the memory 20 and executed by the processor 10 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the terminal device.
The Processor 10 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, etc., the general purpose Processor may be a microprocessor, or the Processor 10 may be any conventional Processor, the Processor 10 is a control center of the terminal device, and various interfaces and lines are used to connect various parts of the terminal device.
The memory 20 mainly includes a program storage area that may store an operating system, an application program required for at least one function, and the like, and a data storage area that may store related data and the like. In addition, the memory 20 may be a high speed random access memory, may also be a non-volatile memory, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), and the like, or the memory 20 may also be other volatile solid state memory devices.
It should be noted that the terminal device may include, but is not limited to, a processor and a memory, and those skilled in the art will understand that the structural block diagram in fig. 3 is only an example of the terminal device and does not constitute a limitation to the terminal device, and may include more or less components than those shown, or combine some components, or different components.
To sum up, the moving object detection method, the moving object detection device, the computer-readable storage medium and the terminal device provided by the embodiments of the present invention have the following beneficial effects:
(1) The accuracy of moving target detection can be improved, and the false recognition rate is reduced;
(2) The calculation amount is small, the memory consumption is small, the real-time performance is good, and the real-time processing requirement can be met on the embedded equipment;
(3) The U, V channel and area statistics mode is used for correcting the motion area, and the method has strong robustness for complex motion scenes (such as dynamic backgrounds of leaf swing, lake surface water ripple and the like).
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.
Claims (10)
1. A moving object detection method, comprising:
acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m >1;
calculating and obtaining a pixel motion measurement index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
Obtaining a motion candidate area of the m1 frame image according to the pixel motion metric index;
acquiring an m2 frame differential image according to the motion candidate area of the m1 frame image; wherein 0 < m2 < m1;
correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image, and acquiring an initial motion area;
and performing area statistical correction on the initial motion area to obtain a motion target area.
2. The method for detecting a moving object according to claim 1, wherein the obtaining the motion candidate region of the m1 frame image according to the pixel motion metric index specifically includes:
for any frame image in m1 frames of images, judging the state of each pixel point according to the pixel motion measurement index of each pixel point of the image;
when the value of the pixel motion metric index is larger than 0, judging the corresponding pixel point as a motion point and marking the motion point as 1, otherwise, judging the corresponding pixel point as a background point and marking the background point as 0;
and obtaining a motion candidate region of the image according to the marked motion point and the marked background point.
3. The method for detecting a moving object according to claim 1, wherein the obtaining of the m2 frame differential image according to the motion candidate region of the m1 frame image specifically comprises:
for the l-th frame differential image in the m2 frame differential images, calculating the l-th frame differential image G_l according to the formula G_l = |D_{k+l} - D_l|; wherein D_{k+l} represents the motion candidate region of the (k+l)-th frame image, 0 < l ≤ m2, l < k+l ≤ m1.
4. The method according to claim 2, wherein the correcting the m2 frame differential image according to the U component value and the V component value of each pixel point of each frame differential image and obtaining the initial motion region specifically comprises:
correcting the pixel points in each frame of differential image according to the U component value and the V component value of each pixel point of each frame of differential image, and correspondingly obtaining m2 corrected motion candidate areas;
performing addition calculation on the m2 corrected motion candidate areas to obtain an added motion candidate area;
and carrying out binarization processing on the added motion candidate area to obtain the initial motion area.
5. The method according to claim 4, wherein the correcting the pixel points in each frame of the differential image according to the U component value and the V component value of each pixel point in each frame of the differential image comprises:
for any pixel point of any frame differential image, judging whether the mark of the pixel point is 1;
when the mark of the pixel point is 1, acquiring an a×a neighborhood of the pixel point; wherein the a×a neighborhood takes the pixel point as a central pixel point, and a > 0;
acquiring U component values and V component values of pixel points marked as 0 contained in the a×a neighborhood;
calculating to obtain neighborhood U component and V component mean values of the pixel points according to the U component values and the V component values;
calculating Euclidean distance between the pixel point and the mean value;
and when the Euclidean distance is smaller than a preset distance threshold value, correcting the mark of the pixel point to be 0.
6. The method for detecting a moving object according to claim 2, wherein the performing area statistical correction on the initial moving area to obtain a moving object area specifically comprises:
correcting the pixel points in the initial motion region according to the marking value of each pixel point in the initial motion region, and correspondingly obtaining a corrected initial motion region;
and obtaining the motion target area according to the corrected initial motion area.
7. The method according to claim 6, wherein the correcting the pixel points in the initial motion region according to the label value of each pixel point in the initial motion region specifically comprises:
for any pixel point in the initial motion region, acquiring a b×b neighborhood of the pixel point; wherein the b×b neighborhood takes the pixel point as a central pixel point, and b > 0;
counting the number n0 of pixel points marked as 0 and the number n1 of pixel points marked as 1 in the b×b neighborhood;
when b×b×β is not more than n0, the mark of the pixel point is corrected to 0;
when b×b×β > n0, correcting the mark of the pixel point to 1; wherein β is a preset percentage.
8. A moving object detecting apparatus, comprising:
the image sequence acquisition module is used for acquiring an image sequence to be processed; wherein the image sequence to be processed comprises m frames of images, m >1;
the pixel motion measurement index acquisition module is used for calculating and obtaining the pixel motion measurement index of each pixel point of the m1 frame image according to the Y component value of each pixel point of each frame image; wherein the pixel motion metric index of the j-th pixel point of the i-th frame image is MD_{i,j} = |Y_{k+i,j} - Y_{i,j}| - (1-α)*Y_{k+i,j}, Y_{k+i,j} represents the Y component value of the j-th pixel point of the (k+i)-th frame image, k > 0, 0 < i ≤ m1, i < k+i ≤ m, 0 < α < 1, 1 < m1 < m;
A motion candidate region obtaining module, configured to obtain a motion candidate region of the m1 frame image according to the pixel motion metric indicator;
the difference image acquisition module is used for acquiring an m2 frame difference image according to the motion candidate area of the m1 frame image; wherein 0 < m2 < m1;
the initial motion area acquisition module is used for correcting the m2 frame differential image according to the neighborhood U component value and the neighborhood V component value of each pixel point of each frame of differential image and acquiring an initial motion area;
and the moving target area acquisition module is used for carrying out area statistical correction on the initial moving area to acquire a moving target area.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored computer program; wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the moving object detection method according to any one of claims 1 to 7.
10. A terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the moving object detection method according to any one of claims 1 to 7 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911138198.9A CN110910429B (en) | 2019-11-19 | 2019-11-19 | Moving target detection method and device, storage medium and terminal equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911138198.9A CN110910429B (en) | 2019-11-19 | 2019-11-19 | Moving target detection method and device, storage medium and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110910429A CN110910429A (en) | 2020-03-24 |
CN110910429B true CN110910429B (en) | 2023-03-17 |
Family
ID=69818164
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911138198.9A Active CN110910429B (en) | 2019-11-19 | 2019-11-19 | Moving target detection method and device, storage medium and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110910429B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111935454B (en) * | 2020-07-27 | 2022-08-26 | 衡阳市大井医疗器械科技有限公司 | Traffic-saving image stream transmission method and electronic equipment |
CN112164058B (en) * | 2020-10-13 | 2024-09-17 | 东莞市瑞图新智科技有限公司 | Silk screen region coarse positioning method and device for optical filter and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1822086A (en) * | 2005-02-16 | 2006-08-23 | 日本电气株式会社 | Image processing method, display device and its driving method |
JP2009251892A (en) * | 2008-04-04 | 2009-10-29 | Fujifilm Corp | Object detection method, object detection device, and object detection program |
CN102184553A (en) * | 2011-05-24 | 2011-09-14 | 杭州华三通信技术有限公司 | Moving shadow detecting method and device |
CN102509311A (en) * | 2011-11-21 | 2012-06-20 | 华亚微电子(上海)有限公司 | Motion detection method and device |
CN102946504A (en) * | 2012-11-22 | 2013-02-27 | 四川虹微技术有限公司 | Self-adaptive moving detection method based on edge detection |
CN103164847A (en) * | 2013-04-03 | 2013-06-19 | 上海理工大学 | Method for eliminating shadow of moving target in video image |
CN109102523A (en) * | 2018-07-13 | 2018-12-28 | 南京理工大学 | A kind of moving object detection and tracking |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5227629B2 (en) * | 2008-03-25 | 2013-07-03 | 富士フイルム株式会社 | Object detection method, object detection apparatus, and object detection program |
- 2019-11-19 CN CN201911138198.9A patent/CN110910429B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1822086A (en) * | 2005-02-16 | 2006-08-23 | 日本电气株式会社 | Image processing method, display device and its driving method |
JP2009251892A (en) * | 2008-04-04 | 2009-10-29 | Fujifilm Corp | Object detection method, object detection device, and object detection program |
CN102184553A (en) * | 2011-05-24 | 2011-09-14 | 杭州华三通信技术有限公司 | Moving shadow detecting method and device |
CN102509311A (en) * | 2011-11-21 | 2012-06-20 | 华亚微电子(上海)有限公司 | Motion detection method and device |
CN102946504A (en) * | 2012-11-22 | 2013-02-27 | 四川虹微技术有限公司 | Self-adaptive moving detection method based on edge detection |
CN103164847A (en) * | 2013-04-03 | 2013-06-19 | 上海理工大学 | Method for eliminating shadow of moving target in video image |
CN109102523A (en) * | 2018-07-13 | 2018-12-28 | 南京理工大学 | A kind of moving object detection and tracking |
Non-Patent Citations (4)
Title |
---|
VIDEO AND IMAGE PROCESSING BASED TECHNIQUES FOR PEOPLE DETECTION AND COUNTING IN CROWDED ENVIRONMENTS;Zeyad Qasim Habeeb Al-zaydi;《research portal》;20170930;1-196 * |
Design of a moving target image detection system based on SOPC; Fan Chuanyang et al.; Signals and Systems; 2012-05-31; 22-26 *
Design of a moving target detection system based on SOPC; Yan Fei; China Masters' Theses Full-text Database, Information Science and Technology; 2009-03-15 (No. 03, 2009); I138-729 *
Real-time motion detection algorithm for high-definition video surveillance systems; Peng Shuang et al.; Computer Engineering; 2014-11-30; Vol. 40, No. 11; 288-296 *
Also Published As
Publication number | Publication date |
---|---|
CN110910429A (en) | 2020-03-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110149482B (en) | Focusing method, focusing device, electronic equipment and computer readable storage medium | |
US8600105B2 (en) | Combining multiple cues in a visual object detection system | |
US8306262B2 (en) | Face tracking method for electronic camera device | |
CN111179302B (en) | Moving target detection method and device, storage medium and terminal equipment | |
CN111723644A (en) | Method and system for detecting occlusion of surveillance video | |
CN111325716A (en) | Screen scratch fragmentation detection method and equipment | |
CN106327488B (en) | Self-adaptive foreground detection method and detection device thereof | |
CN109919002B (en) | Yellow stop line identification method and device, computer equipment and storage medium | |
CN111368587B (en) | Scene detection method, device, terminal equipment and computer readable storage medium | |
CN111325769A (en) | Target object detection method and device | |
CN110599516A (en) | Moving target detection method and device, storage medium and terminal equipment | |
CN113658197B (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN110910429B (en) | Moving target detection method and device, storage medium and terminal equipment | |
US20220270266A1 (en) | Foreground image acquisition method, foreground image acquisition apparatus, and electronic device | |
US20190311492A1 (en) | Image foreground detection apparatus and method and electronic device | |
WO2006081018A1 (en) | Object-of-interest image capture | |
CN111402185B (en) | Image detection method and device | |
CN112581481B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN111160340B (en) | Moving object detection method and device, storage medium and terminal equipment | |
CN113409353A (en) | Motion foreground detection method and device, terminal equipment and storage medium | |
CN111539975B (en) | Method, device, equipment and storage medium for detecting moving object | |
CN112101148A (en) | Moving target detection method and device, storage medium and terminal equipment | |
US20130251202A1 (en) | Facial Features Detection | |
CN110765875A (en) | Method, equipment and device for detecting boundary of traffic target | |
CN109558878B (en) | Image recognition method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20220822 Address after: Floor 12-17, unit 1, building 2, No. 466, Xinyu Road, high tech Zone, Chengdu, Sichuan 610000 Applicant after: Chengdu Lianzhou International Technology Co.,Ltd. Address before: 518000 the 1st and 3rd floors of the south section of building 24 and the 1st-4th floor of the north section of building 28, Shennan Road Science and Technology Park, Nanshan District, Shenzhen City, Guangdong Province Applicant before: TP-LINK TECHNOLOGIES Co.,Ltd. |
|
GR01 | Patent grant | ||
GR01 | Patent grant |