CN111062974B - Method and system for extracting foreground target by removing ghost - Google Patents

Method and system for extracting foreground target by removing ghost

Info

Publication number
CN111062974B
CN111062974B (application CN201911181481.XA)
Authority
CN
China
Prior art keywords
pixel
background
foreground
points
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911181481.XA
Other languages
Chinese (zh)
Other versions
CN111062974A (en)
Inventor
张军
雷民
金淼
陈习文
卢冰
王斯琪
王旭
陈卓
郭鹏
周玮
汪泉
付济良
聂高宁
齐聪
郭子娟
匡义
余雪芹
刘俊
朱赤丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
Original Assignee
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, China Electric Power Research Institute Co Ltd CEPRI filed Critical State Grid Corp of China SGCC
Priority to CN201911181481.XA priority Critical patent/CN111062974B/en
Publication of CN111062974A publication Critical patent/CN111062974A/en
Application granted granted Critical
Publication of CN111062974B publication Critical patent/CN111062974B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20201Motion blur correction

Abstract

The invention discloses a method and a system for extracting a foreground object while removing ghosts, belonging to the technical field of foreground object detection. The method comprises the following steps: acquiring video stream information, taking the pixel values of a number of position points of the initial frame image of the video stream as a sample library, selecting several pixel values with equal probability from the neighborhood of each position of the initial frame image to populate the sample library, and thereby generating a background model; selecting any frame of the video stream and performing foreground/background classification on each pixel point of that frame; updating the pixel value of each background-classified pixel point into the sample library of the background model with a preset probability; and determining whether a pixel point is a ghost pixel point, and removing ghost pixel points to extract the foreground object. By introducing a dynamics model and a flicker value to evaluate the degree of dynamics of each pixel point, and by adaptively updating the sampling distance threshold and the matching threshold, the accuracy of foreground extraction is improved while the missed-detection rate is reduced.

Description

Method and system for extracting foreground target by removing ghost
Technical Field
The present invention relates to the technical field of foreground object detection, and more particularly, to a method and system for extracting a foreground object by removing ghosts.
Background
Foreground object detection automatically segments the video sequence from a camera into foreground objects of interest and background; subsequent research such as object tracking, counting, recognition and classification builds on its result. How to cope with the variability of the scene (such as dynamic background, shadows, etc.) is the biggest challenge this technology faces at the present stage.
The mainstream foreground object detection methods at present are the frame difference method, the optical flow method and the background subtraction method. Background subtraction has become the current research hotspot owing to its combined advantages in real-time performance and accuracy; its core task is to establish a background model with high accuracy and strong adaptability. Commonly used background models fall into parametric and non-parametric models. The Gaussian mixture model (GMM) models the color intensity of each pixel with several Gaussian probability density functions, but its computational complexity is high, its real-time performance is poor, and it has difficulty eliminating false objects caused by dynamic backgrounds. Non-parametric background models are represented by Kernel Density Estimation (KDE), the CodeBook model (CodeBook) and Visual Background extraction (ViBe). Unlike parametric models, KDE estimates the background probability density from the historical pixel values at each pixel position, but its first-in first-out update strategy for observations makes it unable to adapt to long-period events. CodeBook clusters pixel values into code words stored in a local dictionary and learns and updates the model using the temporal information of the pixels, but a background model trained only on a specific number of frames has difficulty adapting to complex conditions such as illumination change and irregular background motion. The ViBe algorithm establishes a background model by randomly sampling neighborhood pixel values; it first matches the pixels of the current frame against the corresponding background model, then classifies pixel points into foreground and background using a global fixed threshold, and finally randomly replaces background-model samples at a certain update rate.
The ViBe algorithm has the merits of a simple model, a small amount of computation, high processing speed and high detection accuracy, but it also has some defects:
1) its strategy of classifying and updating with fixed values has difficulty adapting to dynamic backgrounds such as flowing water and shaking branches and leaves;
2) if a foreground object is present in the single frame used to initialize the background, a ghost appears in subsequent detection.
In summary, the Gaussian mixture model has high computational complexity and poor real-time performance and has difficulty eliminating false targets caused by dynamic backgrounds; among the non-parametric background models represented by Kernel Density Estimation (KDE), the CodeBook model and Visual Background extraction (ViBe), KDE cannot adapt to long-period events because of its first-in first-out update strategy for observations, and CodeBook, relying only on a background model trained from a specific number of frames, has difficulty adapting to complex situations such as illumination changes and irregular background motion.
Disclosure of Invention
To solve the above problems, the invention provides a method for extracting a foreground object by removing ghosts, the method comprising:
acquiring video stream information, taking the pixel values of a number of position points of the initial frame image in the video stream information as a sample library, selecting several pixel values with equal probability from the neighborhood of each position of the initial frame image to assign to the sample library, and generating a background model;
selecting any frame in the video stream information and performing foreground/background classification on each pixel point of the frame;
selecting a pixel point of the frame classified as background, determining its pixel value, randomly replacing a sample at the corresponding position point in the background model with that pixel value according to a preset probability, and updating the pixel value into the sample library of the background model according to the preset probability;
acquiring the pixel saliency values of the pixel points of the frame classified as foreground and the saliency values of the pixel values at a number of position points of the updated background-model sample library, obtaining the significant difference value of each pixel point by taking the absolute value of the difference between the visual saliency maps of the current frame and the background model, and, if a foreground pixel point exists whose significance degree is greater than or equal to the current threshold, removing the ghost pixel points and extracting the foreground object.
Optionally, the method further includes:
reclassifying the ghost pixel points as background and using their pixel values to replace samples at the corresponding position points in the background model.
Optionally, the preset probability is 1/T, where T is an update-time sampling factor;
the update-time sampling factor is obtained from the background dynamics model and the flicker value;
and the update-time sampling factor is adjusted after the pixel value of a ghost pixel point replaces the pixel values at the corresponding position point in the background model.
Optionally, the foreground/background classification of each pixel point of the frame is specifically:
defining a two-dimensional color space centered on the pixel value of the pixel point of the frame with the sampling distance threshold as its radius; if the number of samples at the corresponding position point in the background-model sample library falling within the two-dimensional color space is not less than a preset matching threshold, the pixel point is background, otherwise foreground.
Optionally, the sampling distance threshold is updated according to the flicker value of each pixel and the minimum distance between the pixel point and the center of the two-dimensional color space.
The invention also provides a system for extracting the foreground object by removing ghosts, the system comprising:
the sampling module, which is used for acquiring video stream information, taking the pixel values of a number of position points of the initial frame image in the video stream information as a sample library, selecting several pixel values with equal probability from the neighborhood of each position of the initial frame image to assign to the sample library, and generating a background model;
the classification module, which selects any frame in the video stream information and performs foreground/background classification on each pixel point of the frame;
the updating module, which selects a pixel point of the frame classified as background, determines its pixel value, randomly replaces a sample at the corresponding position point in the background model with that pixel value according to a preset probability, and updates the pixel value into the sample library of the background model according to the preset probability;
the identification module, which is used for acquiring the pixel saliency values of the pixel points of the frame classified as foreground and the saliency values of the pixel values at a number of position points of the updated background-model sample library, obtaining the significant difference value of each pixel point by taking the absolute value of the difference between the visual saliency maps of the current frame and the background model, and, if a foreground pixel point exists whose significance degree is greater than or equal to the current threshold, removing the ghost pixel points to extract the foreground object.
Optionally, the identification module is further configured to:
reclassify the ghost pixel points as background and use their pixel values to replace samples at the corresponding position points in the background model.
Optionally, the preset probability is 1/T, where T is an update-time sampling factor;
the update-time sampling factor is obtained from the background dynamics model and the flicker value;
and the update-time sampling factor is adjusted after the pixel value of a ghost pixel point replaces the pixel values at the corresponding position point in the background model.
Optionally, the foreground/background classification of each pixel point of the frame is specifically:
defining a two-dimensional color space centered on the pixel value of the pixel point of the frame with the sampling distance threshold as its radius; if the number of samples at the corresponding position point in the background-model sample library falling within the two-dimensional color space is not less than a preset matching threshold, the pixel point is background, otherwise foreground.
Optionally, the sampling distance threshold and the matching threshold are updated according to the flicker value of each pixel and the minimum distance between the pixel point and the center of the two-dimensional color space.
Compared with the prior art, the invention has the following beneficial effects:
the degree of dynamics of pixel points is evaluated by introducing a dynamics model and the flicker value, and the sampling distance threshold and the matching threshold are updated adaptively, reducing the missed-detection rate while improving the accuracy of foreground extraction;
the time sub-sampling factor of each pixel point is updated adaptively according to the degree of dynamics, improving the accuracy of the background model in complex scenes;
the saliency value of each pixel position, accumulated over frames, serves as the basis for ghost judgment, and the background model of pixel points identified as ghosts is updated according to the matching threshold, so that ghosts are eliminated rapidly.
Drawings
FIG. 1 is a flow chart of a method for foreground object extraction using ghost removal in accordance with the present invention;
FIG. 2 is a flowchart of an embodiment of a method for foreground object extraction using ghost removal according to the present invention;
FIG. 3 is a flowchart of the foreground/background classification of the pixel points of a frame in an embodiment of the method for extracting a foreground object by removing ghosts of the present invention;
FIG. 4 is a flowchart of determining whether a pixel point is a ghost pixel point in an embodiment of the method for extracting a foreground object by removing ghosts of the present invention;
FIG. 5 is a diagram of a system for foreground object extraction using ghost removal according to the present invention.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings; however, the present invention may be embodied in many different forms and is not limited to the embodiments described herein, which are provided so that the disclosure of the present invention is thorough and complete and fully conveys the scope of the invention to those skilled in the art. The terminology used in the exemplary embodiments illustrated in the accompanying drawings is not intended to limit the invention. In the drawings, the same units/elements are denoted by the same reference numerals.
Unless otherwise defined, terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Further, it will be understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense.
The invention provides a method for extracting a foreground object by removing ghosts, which comprises the following steps:
acquiring video stream information, taking the pixel values of a number of position points of the initial frame image in the video stream information as a sample library, selecting several pixel values with equal probability from the neighborhood of each position of the initial frame image to assign to the sample library, and generating a background model;
selecting any frame in the video stream information and performing foreground/background classification on each pixel point of the frame;
selecting a pixel point of the frame classified as background, determining its pixel value, randomly replacing a sample at the corresponding position point in the background model with that pixel value according to a preset probability, and updating the pixel value into the sample library of the background model according to the preset probability;
acquiring the pixel saliency values of the pixel points of the frame classified as foreground and the saliency values of the pixel values at a number of position points of the updated background-model sample library, obtaining the significant difference value of each pixel point by taking the absolute value of the difference between the visual saliency maps of the current frame and the background model, and, if a foreground pixel point exists whose significance degree is greater than or equal to the current threshold, removing the ghost pixel points and extracting the foreground object.
Reclassify the ghost pixel points as background and use their pixel values to replace samples at the corresponding position points in the background model.
The preset probability is 1/T, where T is an update-time sampling factor;
the update-time sampling factor is obtained from the background dynamics model and the flicker value;
and the update-time sampling factor is adjusted after the pixel value of a ghost pixel point replaces the pixel values at the corresponding position point in the background model.
The foreground/background classification of each pixel point of a frame is performed specifically as follows:
define a two-dimensional color space centered on the pixel value of the pixel point with the sampling distance threshold as its radius; if the number of samples at the corresponding position point in the background-model sample library falling within the two-dimensional color space is not less than a preset matching threshold, the pixel point is background, otherwise foreground.
The sampling distance threshold and the matching threshold are updated according to the flicker value of each pixel and the minimum distance between the pixel point and the center of the two-dimensional color space.
The present invention will be further illustrated with reference to the following examples.
As shown in FIG. 2, video stream information is acquired, the pixel values of a number of position points of the initial frame image in the video stream information are taken as a sample library, several pixel values are selected with equal probability from the neighborhood of each position of the initial frame image and assigned to the sample library, and a background model is generated, specifically:
step 1.1, inputting video stream information ═ { I ═ I0,I1,…,Ii… } (wherein I isiFor the image of the i-th frame,
Figure GDA0003389395690000071
wherein
Figure GDA0003389395690000072
Representing the pixel value at the (s, t) position in the image, the image size is M × N), and first, it is determined whether or not the image is an initial frame image. If the image is the initial frame image, the procedure goes to step 1.2. If not, go to step 2.1.
Step 1.2, with the video initial frame (I)iI ═ 0) pixel values of the image as a sample library, the background model is initialized. By passing from
Figure GDA0003389395690000073
P neighborhood Np(s, t) selecting N pixel values with equal probability to respectively assign background samples to the positions (s, t):
B={B(s,t)|1≤s≤M,1≤t≤N}
B(s,t)={b1(s,t),b2(s,t),…,bn(s,t)}
Figure GDA0003389395690000074
(s′,t′)∈Np(s, t) and k is not less than 1 and not more than n
In the formula: b is a background model in the algorithm; b (s, t) is a background model of the location (s, t); n is a radical ofp(s, t) is the p neighborhood of location (s, t).
Select any frame in the video stream information and perform foreground/background classification on each pixel point of the frame, as shown in FIG. 3, specifically:
Step 2.1: evaluate the degree of dynamics of the background. For a non-initial video frame I_i (i≥1), first define a recursive minimum distance D_i(s,t) and a flicker value g_i(s,t). Following the idea of background dynamics, the recursive minimum distance is defined by collecting the motion-state statistics of each pixel point over the most recent time window; the flicker value of each pixel point is defined from the characteristic that pixel points belonging to a dynamic background switch frequently between background and foreground. [The two update formulas are given only as images in the original.] Here d_min(s,t) is the minimum distance, in the two-dimensional Euclidean color space, between the pixel value I_i(s,t) at position (s,t) of the current frame and its background model B(s,t); g_inc and g_dec are the increase and decrease coefficients of the flicker value, respectively; ⊕ denotes the exclusive-OR operation; and F_i(s,t) indicates whether the pixel point at position (s,t) of the i-th frame belongs to the background.
Step 2.2: according to the flicker value and the minimum distance of each pixel, adaptively update the sampling distance threshold R_i(s,t). [The update formula is given only as an image in the original.]
Step 2.3: to ensure the integrity of the foreground object in the detection result, adaptively update the matching threshold #min(s,t) for foreground pixel points using the minimum distance and the flicker value. [The update formula is given only as an image in the original.]
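The adaptive updates of R_i(s,t) (step 2.2) and #min(s,t) (step 2.3) are likewise image-only in the source; the sketch below is one plausible reading of the text, in which both thresholds grow with the measured dynamics (D plus g) so that matching becomes more tolerant in flickering regions while the foreground matching requirement tightens. Every coefficient here is an assumption.

def update_thresholds(R, n_min, D, g, fg,
                      r0=20.0, k_r=0.5, nmin0=2.0, k_n=0.05):
    """Steps 2.2-2.3 sketch: adapt the sampling distance threshold R(s,t)
    for all pixels, and the matching threshold #min(s,t) for foreground
    pixels only, from the dynamics measures D (min distance) and g (flicker).
    """
    dyn = D + g                          # combined per-pixel dynamics measure
    R[:] = r0 + k_r * dyn                # larger radius where background is dynamic
    n_min[fg] = nmin0 + k_n * dyn[fg]    # stricter matching for foreground points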
Step 2.4: perform foreground/background classification of the pixel point at position (s,t) against the background model B(s,t) established at that position. Construct the two-dimensional Euclidean color space SR_i(s,t) centered on the pixel value I_i(s,t) with the sampling distance threshold R_i(s,t) as its radius. If the number of background-model samples falling within SR_i(s,t) is not less than the matching threshold #min(s,t), the current pixel point is regarded as background; otherwise it is regarded as foreground. [The discrimination formula is given only as an image in the original.]
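Step 2.4 then reduces to counting, per pixel, how many background samples fall inside the color sphere SR_i(s,t); at least #min(s,t) matches means background. A minimal sketch, assuming grayscale frames (for color, the absolute difference would become a Euclidean norm over channels):

import numpy as np

def classify(frame, model, R, n_min):
    """Step 2.4 sketch: per-pixel foreground/background decision.

    Counts samples b_k(s,t) with |b_k(s,t) - I_i(s,t)| < R(s,t); a pixel
    with at least #min(s,t) matches is background (returns False),
    otherwise foreground (returns True).
    """
    dist = np.abs(model - frame[None].astype(np.float32))   # (n, H, W)
    matches = (dist < R[None]).sum(axis=0)                   # samples inside SR_i
    return matches < n_min                                   # True -> foreground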
Select a pixel point of the frame classified as background, determine its pixel value, randomly replace a sample at the corresponding position point in the background model with that pixel value according to the preset probability, and update the pixel value into the sample library of the background model according to the preset probability, specifically:
Step 3.1: adaptively update the time sampling factor of each pixel point. To adapt to the rapid changes of dynamic regions, and to prevent the pixel values of slowly moving foreground objects from being updated into the background through the neighborhood propagation mechanism and thereby degrading the accuracy of the background model, set an adaptive update-time sampling factor for each pixel point. [The update formula is given only as an image in the original.] In the formula, T_max and T_min are the fixed up-adjustment and down-adjustment amplitudes used to control the basic range of the sampling factor.
Step 3.2: for a pixel point at position (s,t) classified as background, with probability 1/T_i(s,t) (where T_i(s,t) is the time sub-sampling factor) its pixel value I_i(s,t) randomly replaces one sample of the background model B(s,t); meanwhile, with probability 1/T_i(s,t), the pixel value I_i(s,t) is updated into the background model of its neighborhood pixels N_p(s,t).
Acquire the pixel saliency values of the pixel points of the frame classified as foreground and the saliency values of the pixel values at a number of position points of the updated background-model sample library; obtain the significant difference value of each pixel point by taking the absolute value of the difference between the visual saliency maps of the current frame and the background model; if a foreground pixel point exists whose significance degree is greater than or equal to the current threshold, it is a ghost pixel point; reclassify the ghost pixel points as background and use their pixel values to replace samples at the corresponding position points in the background model. As shown in FIG. 4, the procedure is specifically:
Step 4.1: using the histogram-contrast-based image pixel saliency detection method, compute the visual saliency maps of the current frame and of the background model respectively, and obtain the significant difference value S_i(s,t) of each pixel point as the absolute value of their difference:
S_i(s,t) = |S_cur(s,t) − S_bg(s,t)|
where S_cur and S_bg denote the saliency maps of the current frame and of the background model, respectively.
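Step 4.1 relies on histogram-contrast (HC) saliency; the sketch below implements a simplified grayscale stand-in in which the saliency of an intensity level is the histogram-weighted sum of its distances to all other levels, so rare high-contrast values score high. Treating the mean of the sample library as the "background image" is an assumption, as are the bin count and the symbol names.

import numpy as np

def hc_saliency(img, bins=64):
    """Simplified histogram-contrast saliency for a grayscale image:
    sal(l) = sum_m hist(m) * |l - m|, looked up per pixel."""
    q = (img.astype(np.float32) * (bins / 256.0)).astype(int).clip(0, bins - 1)
    hist = np.bincount(q.ravel(), minlength=bins) / q.size
    levels = np.arange(bins, dtype=np.float32)
    sal = (hist[None, :] * np.abs(levels[:, None] - levels[None, :])).sum(axis=1)
    return sal[q]

def significant_difference(frame, bg_image):
    """Step 4.1 sketch: S_i(s,t) = |S_cur(s,t) - S_bg(s,t)|."""
    return np.abs(hc_saliency(frame) - hc_saliency(bg_image))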
Step 4.2: perform foreground detection on the current frame using the ViBe algorithm with adaptive threshold updating, and establish a significance degree function H_i(s,t) for all pixel points. For points classified as foreground, update H_i(s,t) in combination with the significant difference value. [The update formula is given only as an image in the original.]
Step 4.3: define the significance degree threshold c_i of the current frame as
c_i = β · S̄_i
where β is a threshold adjustment parameter and S̄_i is the mean of the significant difference values of the foreground pixels. The size of β is determined by the contrast between the ghost region and the background.
Step 4.4: perform ghost judgment on the foreground pixel points every Δi frames. If a foreground pixel point satisfies H_i(s,t) ≥ c_i, the pixel point (s,t) is regarded as a ghost pixel point. The ghost pixel point is first reclassified as background, its current pixel value I_i(s,t) is used to randomly replace n′ samples in the background model, and the time sub-sampling factor is adjusted:
n′ = Ceil(#min(s,t))
[The adjustment formula for the time sub-sampling factor is given only as an image in the original.]
Here Ceil(·) denotes the smallest integer not less than its argument.
Step 4.5: repeat steps 4.1 to 4.4 until no pixel point in the image satisfies H_i(s,t) ≥ c_i, i.e., until the foreground object no longer contains ghost regions.
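Steps 4.2-4.4 can be sketched as one routine. Two pieces are assumptions forced by the image-only formulas: H is updated by simple accumulation of S_i at foreground points, and the time sub-sampling factor of a detected ghost is reset to T_min so the repaired model refreshes quickly; the threshold c_i = β · S̄_i and n′ = Ceil(#min(s,t)) follow the text. Step 4.5's outer repetition would call this routine until no ghost pixels remain.

import numpy as np

def remove_ghosts(frame, model, H, n_min, T, fg, beta=1.5, T_min=2.0):
    """Steps 4.2-4.4 sketch: saliency-based ghost judgment and model repair."""
    rng = np.random.default_rng()
    bg_image = model.mean(axis=0)            # assumed background image
    S = significant_difference(frame, bg_image)
    H[fg] += S[fg]                           # assumed accumulation of H_i
    if not fg.any():
        return fg
    c = beta * S[fg].mean()                  # c_i = beta * mean fg difference
    ghost = fg & (H >= c)                    # H_i(s,t) >= c_i -> ghost
    n = model.shape[0]
    for s, t in zip(*np.nonzero(ghost)):
        n_prime = min(int(np.ceil(n_min[s, t])), n)   # n' = Ceil(#min(s,t))
        ks = rng.choice(n, size=n_prime, replace=False)
        model[ks, s, t] = frame[s, t]        # absorb the ghost into B(s,t)
    T[ghost] = T_min                         # assumed reset of the factor
    return fg & ~ghost                       # ghosts reclassified as background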
Through the above four steps of this embodiment, accurate, parameter-adaptive, ghost-free extraction of the foreground object is finally achieved.
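A driver loop composing the sketches above (all initial map values are assumed constants; frames is an iterable of equally sized grayscale arrays):

import numpy as np

def extract_foreground(frames, judge_interval=10, n_samples=20):
    """End-to-end sketch of steps 1-4, built from the helper sketches above."""
    frames = iter(frames)
    first = next(frames)
    model = init_background_model(first, n_samples)
    h, w = first.shape
    D = np.zeros((h, w), np.float32)
    g = np.zeros((h, w), np.float32)
    R = np.full((h, w), 20.0, np.float32)      # sampling distance threshold map
    n_min = np.full((h, w), 2.0, np.float32)   # matching threshold map
    T = np.full((h, w), 16.0, np.float32)      # time sub-sampling factor map
    H = np.zeros((h, w), np.float32)           # significance degree function
    F_prev = np.zeros((h, w), bool)
    masks = []
    for i, frame in enumerate(frames, start=1):
        fg = classify(frame, model, R, n_min)             # step 2.4
        update_dynamics(frame, model, D, g, F_prev, fg)   # step 2.1
        update_thresholds(R, n_min, D, g, fg)             # steps 2.2-2.3
        conservative_update(frame, model, T, fg)          # step 3
        if i % judge_interval == 0:                       # every delta-i frames
            fg = remove_ghosts(frame, model, H, n_min, T, fg)  # step 4
        F_prev = fg
        masks.append(fg)
    return masks

On synthetic data, a ghost can be provoked by placing the object in frame 0 and letting it move; the step-4 judgment should absorb it:

rng = np.random.default_rng(0)
frames = []
for i in range(60):
    f = rng.normal(100.0, 3.0, (120, 160)).astype(np.float32)
    f[40:70, 10 + 2 * i: 30 + 2 * i] = 220.0   # bright square walking right
    frames.append(f)
masks = extract_foreground(frames)
print("foreground pixels in last frame:", int(masks[-1].sum()))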
The invention provides a system for extracting a foreground object by removing ghosts, the system comprising:
the sampling module 201, which acquires video stream information, takes the pixel values of a number of position points of the initial frame image in the video stream information as a sample library, selects several pixel values with equal probability from the neighborhood of each position of the initial frame image to assign to the sample library, and generates a background model;
the classification module 202, which selects any frame in the video stream information and performs foreground/background classification on each pixel point of the frame;
the updating module 203, which selects a pixel point of the frame classified as background, determines its pixel value, randomly replaces a sample at the corresponding position point in the background model with that pixel value according to a preset probability, and updates the pixel value into the sample library of the background model according to the preset probability;
the identification module 204, which acquires the pixel saliency values of the pixel points of the frame classified as foreground and the saliency values of the pixel values at a number of position points of the updated background-model sample library, obtains the significant difference value of each pixel point by taking the absolute value of the difference between the visual saliency maps of the current frame and the background model, and, if a foreground pixel point exists whose significance degree is greater than or equal to the current threshold, removes the ghost pixel points to extract the foreground object;
the identification module further reclassifies the ghost pixel points as background and uses their pixel values to replace samples at the corresponding position points in the background model.
The preset probability is 1/T, where T is an update-time sampling factor;
the update-time sampling factor is obtained from the background dynamics model and the flicker value;
and the update-time sampling factor is adjusted after the pixel value of a ghost pixel point replaces the pixel values at the corresponding position point in the background model.
The foreground/background classification of each pixel point of a frame is performed specifically as follows:
define a two-dimensional color space centered on the pixel value of the pixel point with the sampling distance threshold as its radius; if the number of samples at the corresponding position point in the background-model sample library falling within the two-dimensional color space is not less than a preset matching threshold, the pixel point is background, otherwise foreground.
The sampling distance threshold and the matching threshold are updated according to the flicker value of each pixel and the minimum distance between the pixel point and the center of the two-dimensional color space.
The degree of dynamics of pixel points is evaluated by introducing a dynamics model and the flicker value, and the sampling distance threshold and the matching threshold are updated adaptively, reducing the missed-detection rate while improving the accuracy of foreground extraction;
the time sub-sampling factor of each pixel point is updated adaptively according to the degree of dynamics, improving the accuracy of the background model in complex scenes;
the saliency value of each pixel position, accumulated over frames, serves as the basis for ghost judgment, and the background model of pixel points identified as ghosts is updated according to the matching threshold, so that ghosts are eliminated rapidly.
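Mapped onto the module decomposition of FIG. 5, the same sketches can be wrapped as a stateful object; the class and method names here are illustrative glue over the step sketches above, not the patent's reference design.

import numpy as np

class ForegroundExtractionSystem:
    """Sampling (201), classification (202), updating (203) and
    identification (204) modules over the step sketches above."""

    def __init__(self, first_frame, n_samples=20):
        self.model = init_background_model(first_frame, n_samples)  # sampling module
        h, w = first_frame.shape
        self.D = np.zeros((h, w), np.float32)
        self.g = np.zeros((h, w), np.float32)
        self.R = np.full((h, w), 20.0, np.float32)
        self.n_min = np.full((h, w), 2.0, np.float32)
        self.T = np.full((h, w), 16.0, np.float32)
        self.H = np.zeros((h, w), np.float32)
        self.F_prev = np.zeros((h, w), bool)
        self.i = 0

    def step(self, frame, judge_interval=10):
        self.i += 1
        fg = classify(frame, self.model, self.R, self.n_min)        # classification
        update_dynamics(frame, self.model, self.D, self.g, self.F_prev, fg)
        update_thresholds(self.R, self.n_min, self.D, self.g, fg)
        conservative_update(frame, self.model, self.T, fg)          # updating
        if self.i % judge_interval == 0:                            # identification
            fg = remove_ghosts(frame, self.model, self.H,
                               self.n_min, self.T, fg)
        self.F_prev = fg
        return fg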
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (8)

1. A method for foreground object extraction using de-ghosting, the method comprising:
acquiring video stream information, taking pixel values of a plurality of position points of an initial frame image in the video stream information as a sample library, selecting a plurality of pixel values with equal probability from the neighborhood of each position of the initial frame image to assign to the sample library, and generating a background model;
selecting any frame in the video stream information and performing foreground/background classification on each pixel point of the frame;
selecting a pixel point of the frame classified as background, determining its pixel value, randomly replacing a sample at the corresponding position point in the background model with that pixel value according to a preset probability, and updating the pixel value into the sample library of the background model according to the preset probability;
acquiring pixel saliency values of the pixel points of the frame classified as foreground and saliency values of the pixel values of a plurality of position points of the updated background-model sample library;
respectively solving the visual saliency maps of the current frame and of the background model by using the histogram-contrast-based image pixel saliency detection method, obtaining the significant difference value of each pixel point by difference-absolute-value calculation, performing foreground detection on the current frame by using the ViBe algorithm with adaptive threshold updating, and establishing a significance degree function H_i(s,t) for all pixel points;
updating the significance degree function H_i(s,t) of points classified as foreground in combination with the significant difference value [the update formula is given only as an image in the original], wherein F_i(s,t) indicates whether the pixel point at position (s,t) of the i-th frame belongs to the background, and S_i(s,t) is the significant difference value of the pixel point;
defining the significance degree threshold c_i of the current frame as c_i = β · S̄_i, wherein β is a threshold adjustment parameter, S̄_i is the mean of the significant difference values of the foreground pixels, and the size of β is determined by the contrast between the ghost region and the background;
performing ghost judgment on the foreground pixel points every Δi frames; if a foreground pixel point exists whose significance degree is greater than or equal to the current significance degree threshold, regarding the pixel point as a ghost pixel point, and removing the ghost pixel points to extract the foreground object.
2. The method of claim 1, further comprising:
reclassifying the ghost pixel points as background and using their pixel values to replace samples at the corresponding position points in the background model.
3. The method of claim 1, wherein the preset probability is 1/T, where T is an update-time sampling factor;
the update-time sampling factor is obtained from the background dynamics model and the flicker value;
and the update-time sampling factor is adjusted after the pixel value of a ghost pixel point replaces the pixel values at the corresponding position point in the background model.
4. The method according to claim 1, wherein the foreground/background classification of each pixel point of the frame is specifically:
determining a two-dimensional color space centered on the pixel value of the pixel point of the frame with the sampling distance threshold as its radius, and, if the number of samples at the corresponding position point in the background-model sample library falling within the two-dimensional color space is not less than a preset matching threshold, taking the pixel point as background, otherwise as foreground.
5. A system for foreground object extraction using de-ghosting, the system comprising:
the sampling module, which is used for acquiring video stream information, taking pixel values of a plurality of position points of an initial frame image in the video stream information as a sample library, selecting a plurality of pixel values with equal probability from the neighborhood of each position of the initial frame image to assign to the sample library, and generating a background model;
the classification module, which selects any frame in the video stream information and performs foreground/background classification on each pixel point of the frame;
the updating module, which selects a pixel point of the frame classified as background, determines its pixel value, randomly replaces a sample at the corresponding position point in the background model with that pixel value according to a preset probability, and updates the pixel value into the sample library of the background model according to the preset probability;
the identification module, which is used for acquiring pixel saliency values of the pixel points of the frame classified as foreground and saliency values of the pixel values of a plurality of position points of the updated background-model sample library;
the identification module respectively solves the visual saliency maps of the current frame and of the background model by using the histogram-contrast-based image pixel saliency detection method, obtains the significant difference value of each pixel point by difference-absolute-value calculation, performs foreground detection on the current frame by using the ViBe algorithm with adaptive threshold updating, and establishes a significance degree function H_i(s,t) for all pixel points;
the significance degree function H_i(s,t) of points classified as foreground is updated in combination with the significant difference value [the update formula is given only as an image in the original], wherein F_i(s,t) indicates whether the pixel point at position (s,t) of the i-th frame belongs to the background, and S_i(s,t) is the significant difference value of the pixel point;
the significance degree threshold c_i of the current frame is defined as c_i = β · S̄_i, wherein β is a threshold adjustment parameter, S̄_i is the mean of the significant difference values of the foreground pixels, and the size of β is determined by the contrast between the ghost region and the background;
ghost judgment is performed on the foreground pixel points every Δi frames; if a foreground pixel point exists whose significance degree is greater than or equal to the current significance degree threshold, the pixel point is regarded as a ghost pixel point, and the ghost pixel points are removed to extract the foreground object.
6. The system of claim 5, the identification module further to:
reclassifying the ghost pixel points as background and using their pixel values to replace samples at the corresponding position points in the background model.
7. The system of claim 5, wherein the preset probability is 1/T, where T is an update-time sampling factor;
the update-time sampling factor is obtained from the background dynamics model and the flicker value;
and the update-time sampling factor is adjusted after the pixel value of a ghost pixel point replaces the pixel values at the corresponding position point in the background model.
8. The system according to claim 5, wherein the foreground/background classification of each pixel point of the frame is specifically:
defining a two-dimensional color space centered on the pixel value of the pixel point of the frame with the sampling distance threshold as its radius, and, if the number of samples at the corresponding position point in the background-model sample library falling within the two-dimensional color space is not less than a preset matching threshold, taking the pixel point as background, otherwise as foreground.
CN201911181481.XA 2019-11-27 2019-11-27 Method and system for extracting foreground target by removing ghost Active CN111062974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911181481.XA CN111062974B (en) 2019-11-27 2019-11-27 Method and system for extracting foreground target by removing ghost

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911181481.XA CN111062974B (en) 2019-11-27 2019-11-27 Method and system for extracting foreground target by removing ghost

Publications (2)

Publication Number Publication Date
CN111062974A CN111062974A (en) 2020-04-24
CN111062974B (en) 2022-02-01

Family

ID=70298683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911181481.XA Active CN111062974B (en) 2019-11-27 2019-11-27 Method and system for extracting foreground target by removing ghost

Country Status (1)

Country Link
CN (1) CN111062974B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633369B (en) * 2020-12-21 2023-04-07 浙江大华技术股份有限公司 Image matching method and device, electronic equipment and computer-readable storage medium
CN112634319A (en) * 2020-12-28 2021-04-09 平安科技(深圳)有限公司 Video background and foreground separation method and system, electronic device and storage medium
CN112819854B (en) * 2021-02-02 2023-06-13 歌尔光学科技有限公司 Ghost detection method, ghost detection device, and readable storage medium
CN113808154A (en) * 2021-08-02 2021-12-17 惠州Tcl移动通信有限公司 Video image processing method and device, terminal equipment and storage medium
CN113780110A (en) * 2021-08-25 2021-12-10 中国电子科技集团公司第三研究所 Method and device for detecting weak and small targets in image sequence in real time
CN114359268A (en) * 2022-03-01 2022-04-15 杭州晨鹰军泰科技有限公司 Foreground detection method and system
CN114578316B (en) * 2022-04-29 2022-07-29 北京一径科技有限公司 Method, device and equipment for determining ghost points in point cloud and storage medium
CN115169387B (en) * 2022-06-20 2023-07-18 北京融合未来技术有限公司 Method and device for detecting prospect of pulse signal, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683062A (en) * 2017-01-10 2017-05-17 厦门大学 Method of checking the moving target on the basis of ViBe under a stationary camera
CN107169997A (en) * 2017-05-31 2017-09-15 上海大学 Background subtraction algorithm under towards night-environment
CN108446630A (en) * 2018-03-20 2018-08-24 平安科技(深圳)有限公司 Airfield runway intelligent control method, application server and computer storage media
CN110060278A (en) * 2019-04-22 2019-07-26 新疆大学 The detection method and device of moving target based on background subtraction
CN110503664A (en) * 2019-08-07 2019-11-26 江苏大学 One kind being based on improved local auto-adaptive sensitivity background modeling method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10373320B2 (en) * 2017-03-17 2019-08-06 Uurmi Systems PVT, LTD Method for detecting moving objects in a video having non-stationary background

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683062A (en) * 2017-01-10 2017-05-17 厦门大学 Method of checking the moving target on the basis of ViBe under a stationary camera
CN107169997A (en) * 2017-05-31 2017-09-15 上海大学 Background subtraction algorithm under towards night-environment
CN108446630A (en) * 2018-03-20 2018-08-24 平安科技(深圳)有限公司 Airfield runway intelligent control method, application server and computer storage media
CN110060278A (en) * 2019-04-22 2019-07-26 新疆大学 The detection method and device of moving target based on background subtraction
CN110503664A (en) * 2019-08-07 2019-11-26 江苏大学 One kind being based on improved local auto-adaptive sensitivity background modeling method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
An Improved ViBe Algorithm Based on Visual Saliency; Peng Li et al.; 2017 International Conference on Computer Technology, Electronics and Communication (ICCTEC); 2017; pp. 603-607 *
SuBSENSE: A Universal Change Detection Method With Local Adaptive Sensitivity; Pierre-Luc St-Charles et al.; IEEE Transactions on Image Processing; January 2015; Vol. 24, No. 1; pp. 359-373 *
Parameter-adaptive shadow-removal foreground object detection algorithm; Jin Miao et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); January 2021; Vol. 49, No. 1; pp. 73-79 *
Moving object detection algorithm based on improved visual background extraction; Mo Shaowen et al.; Acta Optica Sinica; June 2016; Vol. 36, No. 6; pp. 1-10 *

Also Published As

Publication number Publication date
CN111062974A (en) 2020-04-24

Similar Documents

Publication Publication Date Title
CN111062974B (en) Method and system for extracting foreground target by removing ghost
JP6509275B2 (en) Method and apparatus for updating a background model used for image background subtraction
CN108875676B (en) Living body detection method, device and system
Parks et al. Evaluation of background subtraction algorithms with post-processing
CN109272509B (en) Target detection method, device and equipment for continuous images and storage medium
Haines et al. Background subtraction with dirichlet processes
CN110599523A (en) ViBe ghost suppression method fused with interframe difference method
CN108805897B (en) Improved moving target detection VIBE method
KR101891225B1 (en) Method and apparatus for updating a background model
JP2006209755A (en) Method for tracing moving object inside frame sequence acquired from scene
KR102107334B1 (en) Method, device and system for determining whether pixel positions in an image frame belong to a background or a foreground
CN105741319B (en) Improvement visual background extracting method based on blindly more new strategy and foreground model
WO2019197021A1 (en) Device and method for instance-level segmentation of an image
Liang et al. Deep background subtraction with guided learning
CN112927262A (en) Camera lens shielding detection method and system based on video
KR101690050B1 (en) Intelligent video security system
CN113379789B (en) Moving target tracking method in complex environment
CN111931754B (en) Method and system for identifying target object in sample and readable storage medium
CN113052019A (en) Target tracking method and device, intelligent equipment and computer storage medium
Geng et al. Real time foreground-background segmentation using two-layer codebook model
Wang et al. AMBER: Adapting multi-resolution background extractor
CN111667419A (en) Moving target ghost eliminating method and system based on Vibe algorithm
Huynh-The et al. Locally statistical dual-mode background subtraction approach
Tang et al. Fast background subtraction using improved GMM and graph cut
CN113255549B (en) Intelligent recognition method and system for behavior state of wolf-swarm hunting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant