CN108765463B - Moving target detection method combining region extraction and improved textural features - Google Patents

Moving target detection method combining region extraction and improved textural features


Publication number
CN108765463B
Authority
CN
China
Prior art keywords
frame
foreground
region
background
image
Legal status: Active
Application number
CN201810536188.XA
Other languages
Chinese (zh)
Other versions
CN108765463A (en)
Inventor
范新南
薛瑞阳
倪建军
史朋飞
张卓
谢迎娟
Current Assignee
Changzhou Campus of Hohai University
Original Assignee
Changzhou Campus of Hohai University
Application filed by Changzhou Campus of Hohai University
Priority to CN201810536188.XA
Publication of CN108765463A
Application granted
Publication of CN108765463B

Classifications

    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/11 Region-based segmentation
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/254 Analysis of motion involving subtraction of images
    • G06T7/44 Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20081 Training; Learning

Abstract

The invention discloses a moving target detection method combining region extraction and improved texture features, which comprises the following steps: (1) collecting continuous images in a monitoring video as sampling frames; (2) carrying out background modeling and recovery for each pixel point in the sampling frames by using its sampling information; (3) partitioning the image, extracting a foreground region by using the statistical characteristics of the image blocks, judging the illumination change of the foreground region, and determining whether secondary judgment of the foreground region is needed; (4) accurately extracting foreground pixel points in the foreground region. By performing rapid foreground region extraction, the method greatly reduces the computation required for subsequent accurate judgment, and during region extraction it eliminates the two main types of interference: spatial displacement interference (such as leaf shaking) and brightness change interference (such as illumination change). Moving targets in the image sequence are thereby extracted accurately and efficiently.

Description

Moving target detection method combining region extraction and improved textural features
Technical Field
The invention relates to a moving target detection method combining region extraction and improved texture features, and belongs to the technical field of visual detection.
Background
The moving target detection technology based on image sequences is the basis of many high-level computer vision tasks, such as target tracking, behavior understanding and abnormal behavior analysis, and the integrity and effectiveness of the detection result are important for subsequent research. Most existing moving target detection algorithms judge the image frames to be processed point by point in order to detect moving targets accurately. Such point-by-point judgment is easily influenced by noise (illumination, dynamic background, imaging equipment errors and the like), so that a large number of dynamic noise points are mistakenly judged as foreground; moreover, in scenes where the moving target occupies only a small area, point-by-point judgment wastes a large amount of computing resources on background areas without obvious foreground characteristics. In addition, when eliminating noise interference, most existing algorithms uniformly treat all interference points (illumination, leaf shaking and noise points) as sparse noise. Although this removes part of the interference, ignoring the distinct characteristics of each noise type limits its effectiveness against the different kinds of noise.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a moving target detection method combining region extraction and improved texture features. By performing rapid foreground region extraction, the method greatly reduces the computation required for subsequent accurate judgment, and during region extraction it eliminates the two main types of interference: spatial displacement interference (such as leaf shaking) and brightness change interference (such as illumination change). Moving targets in the image sequence are thereby extracted accurately and efficiently.
In order to achieve the purpose, the invention is realized by the following technical scheme:
the invention relates to a moving target detection method combining region extraction and improved texture features, which comprises the following steps:
(1) collecting continuous images in a monitoring video as sampling frames;
(2) carrying out background modeling and recovery on each pixel point in the sampling frame by using the sampling information of each pixel point;
(3) partitioning the image, extracting a foreground region by using the statistical characteristics of the image blocks, judging the illumination change of the foreground region, and determining whether secondary judgment of the foreground region is needed; if so, the secondary judgment is performed using the improved LBP texture features;
(4) accurately extracting foreground pixel points in the foreground region.
In the step (2), a background model is established for each pixel point according to the sampling information of each pixel point, and the established background model comprises the historical frame gray value, the weight and the duration of the pixel point at the position;
for a pixel at position (x, y), I(x, y) = {I_1(x, y), I_2(x, y), …, I_i(x, y), …, I_n(x, y)} is used as the background model of the point, where I_i(x, y) = [g_i(x, y), weight_i, time]; g_i(x, y) is the gray value of the pixel at (x, y) in the i-th training frame, time is the number of occurrences of g_i(x, y), weight_i is the weight of that value, and n is the total number of sampled frames. After initializing the background with the first frame, the background modeling and recovery process is as follows:
(2a) initializing the background model: let I(x, y) = {I_1(x, y)}, where I_1(x, y) = [g_1(x, y), 1, 1];
(2b) for the pixel point of the new training frame, if some element I_i(x, y) in the model I(x, y) has gray attribute g_i(x, y) equal to the gray value of the pixel in the new frame, jumping to step (2c); otherwise, jumping to step (2d);
(2c) adding 1 to the time value of I_i(x, y), traversing I(x, y) to adjust the weight attribute of every element in the model, and reading the next training frame; if it exists, jumping to step (2b); otherwise the algorithm ends;
(2d) adding the new gray value to the model I(x, y) with its time set to 1, traversing I(x, y) to adjust the weight attribute of every element in the model, and reading the next training frame; if it exists, jumping to step (2b); otherwise the algorithm ends;
the weight value adjustment formula is as follows:
[Weight adjustment formula, reproduced only as an image in the original patent: the weight of each model element is a function of the time difference Δt between the current frame and the historical frame, the gray difference Δg to the other elements, and its duration time.]
the background recovery formula is:
[Background recovery formula, reproduced only as an image in the original patent: the background gray value at (x, y) is recovered from the model elements according to their weights, with adjustment coefficient α.]
where α is a constant adjustment coefficient, usually set to 1, and t denotes the t-th training frame.
The specific method of the step (3) is as follows:
(3a) for an image, dividing it into a number of image blocks of the same size, and counting the gray information of each image block region over N consecutive frames;
(3b) taking the image block region characteristics as input characteristics of Gaussian kernel density estimation, and calculating the probability of each block region containing a foreground region;
(3c) judging the illumination change of the image from which the foreground region has been extracted, and determining from the result whether secondary foreground region judgment is needed.
In the step (3b), the formula for performing foreground region probability estimation by using gaussian kernel density estimation is as follows:
P(region_t) = (1/N) · Σ_{i=1}^{N} [1/(√(2π)·σ_i)] · exp(-(region_t - region_i)² / (2σ_i²))
wherein N represents the number of frames, region_i is the feature of the image block in the i-th frame, and σ_i is the kernel width, computed from the median of the absolute differences between samples of adjacent frames; region_t denotes the mean gray feature of the pixels contained in the given image block of the frame to be processed at time t.
In the step (3c), the formula of the illumination judgment is as follows:
[Illumination judgment formula, reproduced only as an image in the original patent: a similarity measure between the image blocks G_1 and G_2.]
wherein G_1 and G_2 respectively denote image blocks at the same position and of the same size in the background frame and the current frame to be detected; if the result of the formula is close to 1, no secondary judgment is performed, otherwise secondary judgment is performed using the improved LBP texture features;
the improved LBP texture feature formula is:
[Improved LBP texture feature formulas, reproduced only as images in the original patent: the LBP code is computed jointly over the reference frame and the frame to be processed, with a sign function that treats neighbor-center differences within the noise tolerance threshold Th_1 as unchanged.]
wherein r_c and g_c respectively denote the gray values of pixel (x_c, y_c) in the reference frame and the frame to be processed, r_p and g_p respectively denote the P neighborhood pixels of (x_c, y_c) in the two frames, and Th_1 is a preset noise tolerance threshold whose value is determined experimentally;
for the region with unchanged improved texture features in the foreground region, the region is considered as the background region, otherwise the region is considered as the foreground region.
In the step (4), the method for accurately extracting the foreground pixel points comprises the following steps:
(4a) constructing a window W of size m×n, and setting two attributes w_s(x, y) and w_f(x, y) for every pixel point in the foreground region, recording respectively the number of times pixel (x, y) is processed and the number of times it is judged as foreground; both are initialized to 0;
(4b) calculating the improved texture features of the pixels inside window W in the foreground region of the frame to be processed, and counting the frequency of each distinct texture feature value in the window to obtain the improved LBP texture feature histogram H_c; performing the same operation on the background frame at the corresponding position to obtain the histogram H_r; normalizing both histograms and letting w_s = w_s + 1 for the pixels in the window;
(4c) calculating the chi-square distance between the two normalized histograms; if the distance is less than 1 it is considered small, and the attribute w_f of every pixel in the current window of the frame to be processed is incremented by 1; otherwise it is unchanged;
(4d) sliding the window with step size 1 so that windows overlap, and repeating (4b) and (4c) until all pixels in the foreground region have been processed;
(4e) for each pixel point, calculating from w_f and w_s the probability P_t(x, y) that it belongs to the foreground; the greater the value, the more likely the pixel is foreground; the background is updated at the same time.
The formula for calculating the probability P_t(x, y) is:
P_t(x, y) = w_f(x, y) / w_s(x, y)
The background update formula is: B_t(x, y) = B_{t-1}(x, y) + [1 - P_t(x, y)]·[I_t(x, y) - B_{t-1}(x, y)]
wherein B_t(x, y) and B_{t-1}(x, y) are the gray values of the pixel's background model in the current frame and the previous frame respectively, and I_t(x, y) is the gray value of the current pixel.
The invention has the following beneficial effects:
(1) When performing background modeling and background recovery, the influence of each pixel value on the background is generally different, so the weights should not all be equal; the size of a weight depends mainly on its variation in the time domain. Here the weight of each pixel value is considered to be influenced mainly by three factors: the time difference Δt between the current frame and the historical frame, the gray difference Δg between them, and the duration time. When adjusting weights, the background modeling method treats the influence of a pixel value on the background model as related to all values already in the model: the newer a value is and the more often it appears, the larger its weight; the larger its gray difference from the existing values in the model, the smaller its weight. After the background model has been initialized with the first frame image, the weights of all elements in the current background model are adjusted whenever a new training frame is added.
(2) To reduce the computation of the subsequent accurate extraction, the invention adopts the idea of region extraction, which not only avoids the large time consumption of traditional point-by-point judgment but also removes part of the dynamic background interference. A moving target changes the overall characteristics of the image regions it passes through, whereas spatial displacement interference such as leaf shaking does not change these characteristics; the method therefore uses the invariance of regional characteristics to eliminate part of the dynamic background, while the non-parametric kernel density estimation quickly extracts the approximate region where the foreground target is located. To further handle illumination change, the invention also provides an improved LBP texture feature operator, which eliminates the interference of illumination change through secondary judgment and completes the extraction of the foreground region. This greatly reduces the computation of the subsequent foreground pixel extraction and effectively eliminates most dynamic interference.
(3) Accurate extraction of foreground pixel points.
In the extracted foreground region, a sliding window moves with overlap, making full use of the improved LBP texture features computed during the secondary judgment of the foreground region; accurate pixel extraction is achieved by matching the texture features of the region a pixel belongs to multiple times.
Drawings
Fig. 1 is a flowchart of the moving object detection method combining region extraction and improved texture features.
Detailed Description
To make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further described below with reference to specific embodiments.
As shown in fig. 1, continuous image frames are first acquired from a surveillance video. For the gray value of each pixel point, the weights of the different gray values are adjusted according to sampled information such as the time difference to the historical frame, the gray difference and the duration, thereby realizing background modeling and recovering a more accurate background model. The image is then partitioned: foreground regions are extracted using the invariance of the regional characteristics of the image blocks, part of the dynamic interference is eliminated, and the illumination change of the foreground region is judged to decide whether secondary judgment with the improved texture features is required. Finally, foreground pixel points are accurately extracted within the foreground region, which greatly reduces the consumption of computing resources and has great practical significance and application value.
The invention relates to a moving target detection method combining region extraction and improved texture features, which comprises the following specific steps:
(1) collecting continuous images in monitoring video as sampling frames
Video is captured in real time from a monitoring camera (e.g., on an expressway, an arterial road or in a nature reserve) as the image sampling sequence; this image sequence is the input of the invention.
(2) Background modeling and recovery
By observing the change of the gray value of the pixel at any given position in the image over the time domain, the influence of each factor on the background model of that pixel is analyzed and the weights are adjusted, realizing background modeling and recovery. The established background model comprises the historical frame gray values, weights and durations of the pixel at that position. For a pixel at position (x, y), I(x, y) = {I_1(x, y), I_2(x, y), …, I_i(x, y), …, I_n(x, y)} is used as the background model of the point, where I_i(x, y) = [g_i(x, y), weight_i, time]; g_i(x, y) is the gray value of the pixel at (x, y) in the i-th training frame, time is the number of occurrences of g_i(x, y), weight_i is the weight of that value, and n is the total number of sampled frames. After initializing the background with the first frame, the background modeling and recovery process is as follows:
(2a) initializing the background model: let I(x, y) = {I_1(x, y)}, where I_1(x, y) = [g_1(x, y), 1, 1];
(2b) for the pixel point of the new training frame, if some element I_i(x, y) in the model I(x, y) has gray attribute g_i(x, y) equal to the gray value of the pixel in the new frame, jumping to step (2c); otherwise, jumping to step (2d);
(2c) adding 1 to the time value of I_i(x, y), traversing I(x, y) to adjust the weight attribute of every element in the model, and reading the next training frame; if it exists, jumping to step (2b); otherwise the algorithm ends;
(2d) adding the new gray value to the model I(x, y) with its time set to 1, traversing I(x, y) to adjust the weight attribute of every element in the model, and reading the next training frame; if it exists, jumping to step (2b); otherwise the algorithm ends.
The weight value adjustment formula is as follows:
[Weight adjustment formula, reproduced only as an image in the original patent: the weight of each model element is a function of the time difference Δt between the current frame and the historical frame, the gray difference Δg to the other elements, and its duration time.]
the background recovery formula is:
[Background recovery formula, reproduced only as an image in the original patent: the background gray value at (x, y) is recovered from the model elements according to their weights, with adjustment coefficient α.]
where α is a constant adjustment coefficient, usually set to 1, and t denotes the t-th training frame.
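For illustration, the following sketch shows one way the background model of step (2) can be organized in code. Since the weight adjustment and background recovery formulas are reproduced only as images in the patent, the weight expression below is an assumption that follows the qualitative description in the text (weights grow with occurrence count and recency, and shrink with the gray difference to the other model elements); the class name, the parameter alpha and the exact expression are hypothetical.

```python
import numpy as np

class PixelBackgroundModel:
    """Sketch of the per-pixel model of step (2). Each element stores
    [gray value, weight, occurrence count (time), last frame seen]."""

    def __init__(self, gray, frame_idx=1):
        # step (2a): initialize the model with the first training frame
        self.entries = [[gray, 1.0, 1, frame_idx]]

    def update(self, gray, frame_idx, alpha=1.0):
        for e in self.entries:
            if e[0] == gray:               # step (2c): known gray value
                e[2] += 1                  # time (occurrence count) + 1
                e[3] = frame_idx
                break
        else:                              # step (2d): new gray value
            self.entries.append([gray, 1.0, 1, frame_idx])
        # steps (2c)/(2d): re-adjust every weight; this expression is an
        # assumed stand-in for the patent's image-only formula (newer and
        # more frequent values weigh more, outlying gray values weigh less)
        for e in self.entries:
            dt = frame_idx - e[3]          # time difference to current frame
            dg = np.mean([abs(e[0] - o[0]) for o in self.entries])
            e[1] = alpha * e[2] / ((1.0 + dt) * (1.0 + dg))

    def background(self):
        # background recovery: return the gray value with the largest weight
        return max(self.entries, key=lambda e: e[1])[0]
```

One such model would be kept per pixel position and updated with every training frame, following the flow of steps (2b) to (2d).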
(3) Foreground region extraction
The regional characteristic changes of the foreground target and of the dynamic background are analyzed to realize foreground region extraction and dynamic background elimination. Most moving target detection algorithms judge the image frames to be processed point by point in order to detect moving targets accurately. Such point-by-point judgment is easily influenced by noise (illumination, dynamic background, imaging equipment errors and the like), so that a large number of dynamic noise points are mistakenly judged as foreground; moreover, in scenes where the moving target occupies only a small area, point-by-point judgment wastes a large amount of computing resources on background areas without obvious foreground characteristics.
Usually the spatial position of a moving target changes continuously. Although the spatial position of displacement-type dynamic background, such as swaying leaves, also changes, the change is confined to a fixed range; within that range most pixels change, yet the overall regional characteristics can be considered largely unchanged. Considered at the pixel level alone, both foreground motion and leaf shaking cause gray changes of pixels, so foreground cannot be distinguished from interference. If instead the image is divided into blocks and the change of each block's regional characteristics is considered, the regions that may contain foreground can be identified. The concrete steps are as follows:
(3a) for an image, dividing it into a number of image blocks of the same size, and counting the gray information of each image block region over N consecutive frames;
(3b) taking the image block region characteristics as input features of Gaussian kernel density estimation (KDE), and calculating the probability that each block region contains the foreground target.
The formula for foreground region probability estimation using gaussian kernel density estimation is:
P(region_t) = (1/N) · Σ_{i=1}^{N} [1/(√(2π)·σ_i)] · exp(-(region_t - region_i)² / (2σ_i²))
wherein N represents the number of frames, region_i is the feature of the image block in the i-th frame, and σ_i is the kernel width, computed here from the median of the absolute differences between samples of adjacent frames. region_t denotes the mean gray feature of the pixels contained in the given image block of the frame to be processed at time t.
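A minimal sketch of the block-level judgment of step (3b), assuming the standard Gaussian kernel density estimator written above and the usual MAD-based bandwidth; the function name and the 0.68·√2 normalizing constant are illustrative choices rather than values taken from the patent.

```python
import numpy as np

def block_foreground_probability(history, current):
    """history: mean gray value of one image block over the N sampled frames
    (region_1 ... region_N); current: region_t of the frame to be processed.
    Returns the kernel density estimate P(region_t); a low value suggests
    the block contains a foreground target."""
    history = np.asarray(history, dtype=float)
    # kernel width from the median of absolute differences between samples
    # of adjacent frames, as described in step (3b)
    diffs = np.abs(np.diff(history))
    mad = np.median(diffs) if diffs.size else 1.0
    sigma = max(mad / (0.68 * np.sqrt(2.0)), 1e-6)
    kernels = np.exp(-((current - history) ** 2) / (2.0 * sigma ** 2))
    return float(np.mean(kernels / (np.sqrt(2.0 * np.pi) * sigma)))
```

A block whose estimate falls below a chosen threshold is kept as a candidate foreground region; the threshold itself is scene-dependent and not fixed by the patent.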
(3c) judging the illumination change of the image from which the foreground region has been extracted, and determining from the result whether secondary foreground region judgment is needed. The formula of the illumination judgment is:
[Illumination judgment formula, reproduced only as an image in the original patent: a similarity measure between the image blocks G_1 and G_2.]
wherein G_1 and G_2 respectively denote image blocks at the same position and of the same size in the background frame and the current frame to be detected. If the result of the above formula approaches 1, no secondary judgment is made; otherwise, secondary judgment is performed using the improved LBP texture features.
The improved LBP texture feature formula is:
[Improved LBP texture feature formulas, reproduced only as images in the original patent: the LBP code is computed jointly over the reference frame and the frame to be processed, with a sign function that treats neighbor-center differences within the noise tolerance threshold Th_1 as unchanged.]
wherein r_c and g_c respectively denote the gray values of pixel (x_c, y_c) in the reference frame and the frame to be processed, r_p and g_p respectively denote the P neighborhood pixels (usually the 8-neighborhood) of (x_c, y_c) in the two frames, and Th_1 is a noise tolerance threshold determined experimentally, generally any value between 10 and 15.
For the region with unchanged improved texture features in the foreground region, the region is considered as the background region, otherwise the region is considered as the foreground region.
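The sketch below shows one plausible reading of the improved LBP operator consistent with the variable definitions above: the neighbor-center difference of the frame to be processed is compared against that of the reference frame, and differences within the noise tolerance Th_1 count as unchanged. The exact formulas appear only as images in the patent, so this encoding is an assumption.

```python
def improved_lbp(ref, cur, xc, yc, th1=12):
    """Assumed improved LBP at pixel (xc, yc). Bit p is set only when the
    structural difference between the frame to be processed (cur) and the
    reference frame (ref) exceeds the noise tolerance th1, which suppresses
    uniform brightness shifts. ref and cur are 2-D gray-level arrays."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # the 8-neighborhood
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        d_cur = int(cur[yc + dy, xc + dx]) - int(cur[yc, xc])
        d_ref = int(ref[yc + dy, xc + dx]) - int(ref[yc, xc])
        if abs(d_cur - d_ref) > th1:
            code |= 1 << bit
    return code
```

Under this reading, a pure illumination change shifts the center and its neighbors by roughly the same amount, keeps every difference below Th_1 and yields code 0, so the region is classified as background.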
(4) Accurate extraction of foreground pixels
Foreground pixel points are accurately extracted within the foreground region based on the improved LBP texture features, making full use of the LBP texture feature maps already computed during the secondary judgment of the foreground region. By matching the texture feature histograms between the frame to be detected and the background image, the ratio of the number of times each pixel is judged as foreground to the total number of times it is processed yields the foreground probability of that pixel. The steps are as follows:
(4a) constructing a window W of size m×n, and setting two attributes w_s(x, y) and w_f(x, y) for every pixel point in the foreground region, recording respectively the number of times pixel (x, y) is processed and the number of times it is judged as foreground; both are initialized to 0;
(4b) calculating the improved texture features of the pixels inside window W in the foreground region of the frame to be processed, and counting the frequency of each distinct texture feature value in the window to obtain the improved LBP texture feature histogram H_c; performing the same operation on the background frame at the corresponding position to obtain the histogram H_r; normalizing both histograms and letting w_s = w_s + 1 for the pixels in the window;
(4c) calculating the chi-square distance between the two normalized histograms; if the distance is less than 1 it is considered small, and the attribute w_f of every pixel in the current window of the frame to be processed is incremented by 1; otherwise it is unchanged;
(4d) sliding the window with step size 1 so that windows overlap, and repeating (4b) and (4c) until all pixels in the foreground region have been processed;
(4e) for each pixel point, calculating from w_f and w_s the probability P_t(x, y) that it belongs to the foreground; the greater the value, the more likely the pixel is foreground; the background is updated at the same time.
The formula for calculating the probability P_t(x, y) is:
P_t(x, y) = w_f(x, y) / w_s(x, y)
The background update formula is: B_t(x, y) = B_{t-1}(x, y) + [1 - P_t(x, y)]·[I_t(x, y) - B_{t-1}(x, y)]
wherein B_t(x, y) and B_{t-1}(x, y) are the gray values of the pixel's background model in the current frame and the previous frame respectively, and I_t(x, y) is the gray value of the current pixel.
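The following sketch ties step (4) together under the same assumptions. The single-frame helper lbp_code used to build the histograms H_c and H_r, the default window size m = n = 8 and the 256-bin histogram layout are illustrative choices; the patent's own improved LBP formulas are reproduced only as images.

```python
import numpy as np

def lbp_code(img, x, y, th1=12):
    # single-frame LBP with the same noise tolerance th1; assumed here as
    # the per-frame operator whose histograms are matched in step (4b)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if int(img[y + dy, x + dx]) - int(img[y, x]) > th1:
            code |= 1 << bit
    return code

def extract_foreground(cur, ref, region, m=8, n=8, th1=12):
    """cur, ref: current frame and background frame (2-D gray arrays);
    region: boolean mask of the extracted foreground region.
    Returns P_t(x, y) = w_f(x, y) / w_s(x, y) per pixel."""
    h, w = cur.shape
    w_s = np.zeros((h, w)); w_f = np.zeros((h, w))
    for y0 in range(1, h - m):               # step (4d): overlapping windows,
        for x0 in range(1, w - n):           # sliding with step size 1
            if not region[y0:y0 + m, x0:x0 + n].any():
                continue
            h_c = np.zeros(256); h_r = np.zeros(256)
            for y in range(y0, y0 + m):      # step (4b): window histograms
                for x in range(x0, x0 + n):
                    h_c[lbp_code(cur, x, y, th1)] += 1
                    h_r[lbp_code(ref, x, y, th1)] += 1
            h_c /= h_c.sum(); h_r /= h_r.sum()
            w_s[y0:y0 + m, x0:x0 + n] += 1   # every pixel in W was processed
            # step (4c): chi-square distance between normalized histograms;
            # per the text, a distance below 1 increments the foreground count
            chi2 = np.sum((h_c - h_r) ** 2 / (h_c + h_r + 1e-10))
            if chi2 < 1.0:
                w_f[y0:y0 + m, x0:x0 + n] += 1
    return w_f / np.maximum(w_s, 1)          # step (4e): P_t(x, y)
```

The background update B_t(x, y) = B_{t-1}(x, y) + [1 - P_t(x, y)]·[I_t(x, y) - B_{t-1}(x, y)] given above can then be applied pixel-wise with the returned P_t.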
The foregoing shows and describes the general principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description only illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (6)

1. A moving target detection method combining region extraction and improved texture features, characterized in that the method comprises the following steps:
(1) collecting continuous images in a monitoring video as sampling frames;
(2) carrying out background modeling and recovery on each pixel point in the sampling frame by using the sampling information of each pixel point;
in the step (2), a background model is established for each pixel point according to the sampling information of each pixel point;
for a pixel at position (x, y), I(x, y) = {I_1(x, y), I_2(x, y), …, I_i(x, y), …, I_n(x, y)} is used as the background model of the point, where I_i(x, y) = [g_i(x, y), weight_i, time]; g_i(x, y) is the gray value of the pixel at (x, y) in the i-th training frame, time is the number of occurrences of g_i(x, y), weight_i is the weight of that value, and n is the total number of sampled frames; after initializing the background with the first frame, the background modeling and recovery process is as follows:
(2a) initializing the background model: let I(x, y) = {I_1(x, y)}, where I_1(x, y) = [g_1(x, y), 1, 1];
(2b) for the pixel point of the new training frame, if some element I_i(x, y) in the model I(x, y) has gray attribute g_i(x, y) equal to the gray value of the pixel in the new frame, jumping to step (2c); otherwise, jumping to step (2d);
(2c) adding 1 to the time value of I_i(x, y), traversing I(x, y) to adjust the weight attribute of every element in the model, and reading the next training frame; if it exists, jumping to step (2b); otherwise the algorithm ends;
(2d) adding the new gray value to the model I(x, y) with its time set to 1, traversing I(x, y) to adjust the weight attribute of every element in the model, and reading the next training frame; if it exists, jumping to step (2b); otherwise the algorithm ends;
the weight value adjustment formula is as follows:
[Weight adjustment formula, reproduced only as an image in the original patent: the weight of each model element is a function of the time difference Δt between the current frame and the historical frame, the gray difference Δg to the other elements, and its duration time.]
the background recovery formula is:
[Background recovery formula, reproduced only as an image in the original patent: the background gray value at (x, y) is recovered from the model elements according to their weights, with adjustment coefficient α.]
wherein α is a constant adjustment coefficient, usually set to 1, and t denotes the t-th training frame;
(3) partitioning the image, extracting a foreground region by using the statistical characteristics of the image block, judging the illumination change of the foreground region, and determining whether secondary judgment of the foreground region is needed; if the secondary judgment is needed, the improved LBP texture characteristics are used for secondary judgment;
(4) accurately extracting foreground pixel points in the foreground region.
2. The method of claim 1, characterized in that the specific method of step (3) is as follows:
(3a) for an image, dividing it into a number of image blocks of the same size, and counting the gray information of each image block region over N consecutive frames;
(3b) taking the image block region characteristics as input characteristics of Gaussian kernel density estimation, and calculating the probability of each block region containing a foreground region;
(3c) judging the illumination change of the image from which the foreground region has been extracted, and determining from the result whether secondary foreground region judgment is needed.
3. The method of claim 2, characterized in that in step (3b), the formula for foreground region probability estimation using Gaussian kernel density estimation is:
P(region_t) = (1/N) · Σ_{i=1}^{N} [1/(√(2π)·σ_i)] · exp(-(region_t - region_i)² / (2σ_i²))
wherein N represents the number of frames, region_i is the feature of the image block in the i-th frame, and σ_i is the kernel width, computed from the median of the absolute differences between samples of adjacent frames; region_t denotes the mean gray feature of the pixels contained in the given image block of the frame to be processed at time t.
4. The method of claim 2, characterized in that in step (3c), the formula of the illumination judgment is:
[Illumination judgment formula, reproduced only as an image in the original patent: a similarity measure between the image blocks G_1 and G_2.]
wherein G_1 and G_2 respectively denote image blocks at the same position and of the same size in the background frame and the current frame to be detected; if the result of the formula is close to 1, no secondary judgment is performed, otherwise secondary judgment is performed using the improved LBP texture features;
the improved LBP texture feature formula is:
[Improved LBP texture feature formulas, reproduced only as images in the original patent: the LBP code is computed jointly over the reference frame and the frame to be processed, with a sign function that treats neighbor-center differences within the noise tolerance threshold Th_1 as unchanged.]
wherein r_c and g_c respectively denote the gray values of pixel (x_c, y_c) in the reference frame and the frame to be processed, r_p and g_p respectively denote the P neighborhood pixels of (x_c, y_c) in the two frames, and Th_1 is a preset noise tolerance threshold whose value is determined experimentally;
for the region with unchanged improved texture features in the foreground region, the region is considered as the background region, otherwise the region is considered as the foreground region.
5. The method of claim 1, characterized in that in step (4), the method for accurately extracting the foreground pixel points is as follows:
(4a) constructing a window W of size m×n, and setting two attributes w_s(x, y) and w_f(x, y) for every pixel point in the foreground region, recording respectively the number of times pixel (x, y) is processed and the number of times it is judged as foreground; both are initialized to 0;
(4b) calculating the improved texture features of the pixels inside window W in the foreground region of the frame to be processed, and counting the frequency of each distinct texture feature value in the window to obtain the improved LBP texture feature histogram H_c; performing the same operation on the background frame at the corresponding position to obtain the histogram H_r; normalizing both histograms and letting w_s = w_s + 1 for the pixels in the window;
(4c) calculating the chi-square distance between the two normalized histograms; if the distance is less than 1 it is considered small, and the attribute w_f of every pixel in the current window of the frame to be processed is incremented by 1; otherwise it is unchanged;
(4d) sliding the window with step size 1 so that windows overlap, and repeating (4b) and (4c) until all pixels in the foreground region have been processed;
(4e) for each pixel point, calculating from w_f and w_s the probability P_t(x, y) that it belongs to the foreground; the greater the value, the more likely the pixel is foreground; the background is updated at the same time;
the formula for calculating the probability P_t(x, y) is:
P_t(x, y) = w_f(x, y) / w_s(x, y)
6. The method of claim 5, characterized in that:
the background update formula is: B_t(x, y) = B_{t-1}(x, y) + [1 - P_t(x, y)]·[I_t(x, y) - B_{t-1}(x, y)]
wherein B_t(x, y) and B_{t-1}(x, y) are the gray values of the pixel's background model in the current frame and the previous frame respectively, and I_t(x, y) is the gray value of the current pixel.
CN201810536188.XA 2018-05-30 2018-05-30 Moving target detection method combining region extraction and improved textural features Active CN108765463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810536188.XA CN108765463B (en) 2018-05-30 2018-05-30 Moving target detection method combining region extraction and improved textural features


Publications (2)

Publication Number Publication Date
CN108765463A CN108765463A (en) 2018-11-06
CN108765463B (en) 2021-11-16

Family

ID=64003963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810536188.XA Active CN108765463B (en) 2018-05-30 2018-05-30 Moving target detection method combining region extraction and improved textural features

Country Status (1)

Country Link
CN (1) CN108765463B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033488B (en) * 2019-04-09 2023-09-15 深圳市梦网视讯有限公司 Self-adaptive light source direction analysis method and system based on compressed information
CN111583293B (en) * 2020-05-11 2023-04-11 浙江大学 Self-adaptive image segmentation method for multicolor double-photon image sequence
CN113658238B (en) * 2021-08-23 2023-08-08 重庆大学 Near infrared vein image high-precision matching method based on improved feature detection


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673403A (en) * 2009-10-10 2010-03-17 安防制造(中国)有限公司 Target following method in complex interference scene
CN107452005A (en) * 2017-08-10 2017-12-08 中国矿业大学(北京) A kind of moving target detecting method of jointing edge frame difference and gauss hybrid models
CN108550163A (en) * 2018-04-19 2018-09-18 湖南理工学院 Moving target detecting method in a kind of complex background scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Moving target detection based on non-negative matrix factorization and similarity analysis; 范新南 (Fan Xinnan) et al.; Computer and Modernization (《计算机与现代化》); 2018-04-20 (No. 272); pp. 37-41 *

Also Published As

Publication number Publication date
CN108765463A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN109753975B (en) Training sample obtaining method and device, electronic equipment and storage medium
EP1697901B1 (en) Method for modeling background and foreground regions
CN107256225B (en) Method and device for generating heat map based on video analysis
JP4964159B2 (en) Computer-implemented method for tracking an object in a sequence of video frames
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
CN103530893B (en) Based on the foreground detection method of background subtraction and movable information under camera shake scene
US10373320B2 (en) Method for detecting moving objects in a video having non-stationary background
CN111369597B (en) Particle filter target tracking method based on multi-feature fusion
CN109685045B (en) Moving target video tracking method and system
CN111723644A (en) Method and system for detecting occlusion of surveillance video
CN108876820B (en) Moving target tracking method under shielding condition based on mean shift
CN108765463B (en) Moving target detection method combining region extraction and improved textural features
CN109255326B (en) Traffic scene smoke intelligent detection method based on multi-dimensional information feature fusion
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN106157330B (en) Visual tracking method based on target joint appearance model
CN107944354B (en) Vehicle detection method based on deep learning
CN110415260B (en) Smoke image segmentation and identification method based on dictionary and BP neural network
CN102346854A (en) Method and device for carrying out detection on foreground objects
CN105513053A (en) Background modeling method for video analysis
CN113780110A (en) Method and device for detecting weak and small targets in image sequence in real time
Gao et al. Agricultural image target segmentation based on fuzzy set
CN113379789B (en) Moving target tracking method in complex environment
CN110751670B (en) Target tracking method based on fusion
CN107871315B (en) Video image motion detection method and device
CN107704864B (en) Salient object detection method based on image object semantic detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant