CN111784723A - Foreground extraction algorithm based on confidence weighted fusion and visual attention - Google Patents

Foreground extraction algorithm based on confidence weighted fusion and visual attention

Info

Publication number
CN111784723A
Authority
CN
China
Prior art keywords
confidence
texture
color
foreground
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010111277.7A
Other languages
Chinese (zh)
Inventor
成科扬
孙爽
荣兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202010111277.7A
Publication of CN111784723A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 Analysis of motion using feature-based methods involving models
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a foreground extraction algorithm based on confidence-weighted fusion and visual attention. A confidence is first attached to the colour and texture values of each sample in the background model. During classification, the confidences of the samples whose colour distance and texture distance to the current pixel fall within the current frame's distance threshold are summed separately for the two dimensions; the two sums are then combined with different weights, and the current pixel is classified as background when the weighted sum reaches the decision value, and as foreground otherwise. The confidences and weights are then updated adaptively. Next, the video sequence is divided evenly into M subsequences; the foreground detected in the last frame preceding a subsequence serves as the region of interest R for static foreground detection, and the colour saliency and texture similarity of region R are computed. Region R is detected cyclically as a static foreground until the last frame of the subsequence, or until it is judged to be background. The disclosed algorithm effectively overcomes the colour camouflage problem and is more robust in detecting static foregrounds.

Description

Foreground extraction algorithm based on confidence weighted fusion and visual attention
Technical Field
The invention belongs to the technical field of computer vision and particularly relates to foreground detection; it can be applied to intelligent security video surveillance in public places such as schools and squares.
Background
Foreground detection based on background modelling generally compares the data of the current frame with a background model to extract foreground objects and then updates the background model. The difficulty of background modelling lies in overcoming problems such as colour camouflage and the sudden stillness of a moving target while still extracting a complete foreground. Existing algorithms include pixel-level and region-level methods as well as background modelling methods based on colour information and texture features; each has specific advantages and guarantees real-time performance, but most cannot overcome colour camouflage, sudden object stillness, and similar problems.
In region-level modelling, Liu Cuiwei et al. proposed updating the model by online subspace learning; in 2015, Beaugendre et al. proposed a background modelling method with adaptive region propagation, and in 2017 Maity et al. used block statistical features to detect the foreground. These methods all share the drawback of region-level modelling: an accurate foreground and its contour cannot be obtained, so the results are unsatisfactory.
After Olivier Barnich et al. proposed the pixel-based background subtraction method ViBe in 2009, pixel-based background modelling developed rapidly and can effectively avoid the problems of region-level modelling. In 2014, Pierre-Luc St-Charles et al. proposed a foreground detection algorithm based on pixel values and LBSP texture features, followed by algorithms based on locally adaptive sensitivity segmentation (SuBSENSE). Such algorithms detect the foreground well in ordinary scenes but handle colour camouflage and static foreground targets poorly: they judge the foreground at the colour level first, so a colour-camouflaged target goes undetected, and once a target becomes stationary (for example, an abandoned object or a person taking a nap) it is quickly absorbed into the background by spatial diffusion, random updating, and similar strategies. A foreground detection algorithm aimed at colour camouflage and static targets is therefore significant.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems that background subtraction methods can neither overcome colour camouflage and extract a complete foreground nor detect a static foreground target, the invention provides a foreground extraction algorithm based on confidence-weighted fusion and visual attention.
The technical scheme is as follows: the invention provides a foreground extraction algorithm based on confidence-weighted fusion and visual attention, comprising the following steps:
(1) initializing a background model from the first N frames;
(2) detecting moving foreground targets by adaptively weighted fusion of colour and texture confidences;
(3) updating the confidences and weights of the colour and texture dimensions of the samples;
(4) constructing a visual attention mechanism to detect short-term static foregrounds, and correcting and fusing the foreground detection results;
(5) guiding the update of the background model according to the foreground detection results.
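For concreteness, the five steps can be arranged as the following minimal Python sketch; all helper names, signatures, and default values are illustrative assumptions rather than details fixed by the invention:

    import numpy as np

    def extract_foreground(frames, N=35, M=10):
        """Hypothetical driver for steps (1)-(5); the helpers are sketched later."""
        model = init_background_model(frames[:N])                   # step (1)
        masks = []
        for frame in frames[N:]:
            mask = classify_pixels(frame, model)                    # step (2)
            update_confidences_and_weights(frame, model, mask)      # step (3)
            mask = detect_static_foreground(frame, model, mask, M)  # step (4)
            update_background_model(frame, model, mask)             # step (5)
            masks.append(mask)
        return masks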
Further, in step (1), a background model B(x) is established from the pixel information of the first N frames; the model consists of N samples and has the following structure:

B(x) = {B_1(x), B_2(x), ..., B_i(x), ..., B_N(x)}

where each sample B_i(x) consists of a colour value v_i, an LBSP texture feature value LBSP_i(x), a colour-dimension confidence C_i^1(x) and a texture-dimension confidence C_i^2(x), i.e.

B_i(x) = {v_i, LBSP_i(x), C_i^1(x), C_i^2(x)}
Further, in step (2), the samples whose distance to the current pixel I_t(x) is smaller than a given distance threshold R(x) are first marked as strongly correlated samples, and their number is denoted n. The colour confidences C_i^1(x) and texture confidences C_i^2(x) of the strongly correlated samples are denoted c_j^1 and c_j^2 (j = 1, ..., n) respectively, i.e.:

{c_j^m | j = 1, ..., n} = {C_i^m(x) : dist_m(I_t(x), B_i(x)) < R(x)}

where m takes the value 1 or 2, corresponding to the colour and texture dimensions; the Euclidean distance is used for the colour-dimension comparison and the Hamming distance for the texture-dimension comparison.
The colour confidences and texture confidences of the strongly correlated samples are then summed separately, and the two sums are combined with weights; if the weighted sum is smaller than the minimum threshold min, the pixel is judged to be foreground, otherwise background, i.e.:

lambda_1(x) · Σ_j c_j^1 + lambda_2(x) · Σ_j c_j^2 < min  =>  foreground, otherwise background
Further, in step (3), the update strategy for the sample confidences and confidence weights includes:
(1) For a pixel detected as background, the sample template with the minimum confidence in the model is replaced by the current pixel information; the new sample is thus introduced into the model to adapt to background changes, and its confidence value is increased by 1 so that it is not itself rapidly replaced. To ensure the stability of the model, the confidences of all of the pixel's sample templates are then decremented by a small fixed decay amount.
(2) When, at time t_i(x), the distance between the current-frame pixel and a sample in the model falls within the given distance threshold, that sample is valid at that time; the confidence of valid samples is then increased and that of invalid samples decreased. The colour- and texture-dimension confidence update (given in the source only as an equation image) depends on m, gamma and N, where m takes the values 1 and 2, corresponding to the colour and texture dimensions, gamma is the number of valid samples and N is the total number of samples. Moderately increasing the confidence of valid samples while decreasing that of invalid samples keeps the sample confidences reasonably distributed.
(3) The update strategy for the colour weight lambda_1(x) and the texture weight lambda_2(x) is as follows: when the sum of the colour confidences exceeds the threshold T, the colour weight lambda_1(x) is updated at the larger update level phi_max; otherwise it is updated at the smaller level phi_min. Likewise, when the sum of the texture confidences exceeds the threshold T, the texture weight lambda_2(x) is updated at the larger level phi_max; otherwise it is updated at the smaller level phi_min.
Further, in the step (4), constructing a visual attention mechanism to detect the stationary foreground specifically includes the following steps:
(1) After the model is initialized, the video sequence is divided into M subsequences, denoted V_c, c ∈ {1, 2, ..., M}, where each subsequence V_c contains n frames. Denote the first frame of subsequence V_c by v_c; the frame preceding it is then u_{c-1}.
(2) Take the foreground region R detected in the previous frame u_{c-1} as the region of interest of the current frame v_c. If no region R exists in the current frame, the following operations are skipped; otherwise the colour saliency SV_c(R) of region R and the texture similarity K_c(R) between the background and the current frame within R are computed, and the procedure continues with step (3).
(3) A visual attention mechanism checks whether the region still attracts attention in the way a foreground object does. Based on the colour saliency SV(R) and the texture similarity K(R) between region and background, whether the region is background is judged by:

P_c(R) ∝ K_c(R) / SV_c(R),    P_B(R) ∝ 1 / SV_B(R)

where P_c(R) is the likelihood that region R in the current frame is background; it is inversely proportional to the colour saliency SV_c(R) of R in the current frame and proportional to the texture similarity K_c(R) between R in the current frame and the background. P_B(R) is the threshold for R being background and is inversely proportional to the saliency SV_B(R) of R in the background.
(4) If P_B(R) > P_c(R), the current region R is a static foreground; otherwise region R is considered background.
(5) If the region of interest R of the current frame is judged to be a static foreground target, the procedure moves to the next frame and repeats the operations above; otherwise the static foreground detection for the current subsequence V_c ends.
Further, in step (5), the background model is updated under guidance as follows: the pixels of the dynamic and static foreground target regions detected in steps (2) and (4) are refused for updating the samples in the background model, while the pixel information of the regions detected as background guides the update of the background model samples.
The invention has the beneficial effects that:
the foreground detection process of color dimensionality and texture dimensionality is fused based on a foreground extraction algorithm of confidence weighted fusion and visual attention, and the problem of foreground omission caused by color camouflage is solved to a great extent; in addition, static targets such as short-time stay can be extracted by adopting visual attention and significance detection, and missing detection is avoided as much as possible, so that the prospect is more complete and accurate.
Drawings
FIG. 1 is a schematic diagram of a core structure of a foreground extraction algorithm based on confidence weighted fusion and visual attention according to the present invention.
Fig. 2 is a flow chart of the static foreground detection algorithm based on the visual attention mechanism according to the present invention.
Detailed Description
The invention will be further explained with reference to the drawings.
As shown in Fig. 1, the foreground extraction algorithm based on confidence-weighted fusion and visual attention mainly comprises pixel classification based on confidence-weighted fusion, the updating of confidences and weights, and short-term static foreground detection based on visual attention. The implementation of the invention is explained in detail below in terms of these aspects.
Model initialization: a background model B(x) is first established from the pixel information of the first N frames; the model consists of N samples and has the following structure:

B(x) = {B_1(x), B_2(x), ..., B_i(x), ..., B_N(x)}

where each sample B_i(x) consists of a colour value v_i, an LBSP texture feature value LBSP_i(x), a colour-dimension confidence C_i^1(x) and a texture-dimension confidence C_i^2(x), i.e.

B_i(x) = {v_i, LBSP_i(x), C_i^1(x), C_i^2(x)}
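As a concrete illustration, the model can be held in per-sample arrays; the following Python sketch assumes this layout, a reduced 8-neighbour LBSP (the real LBSP uses a 16-pixel pattern) and uniform initial confidences, all of which are assumptions rather than details fixed by the invention:

    import numpy as np

    def lbsp(img, thresh=30):
        """Toy LBSP descriptor: one bit per 8-neighbour whose absolute
        difference from the centre pixel is within thresh (the patented
        method's full 16-pixel pattern is not reproduced here)."""
        g = img.mean(axis=2).astype(np.int32)            # grayscale proxy
        out = np.zeros(g.shape, np.uint16)
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        for b, (dy, dx) in enumerate(shifts):
            n = np.roll(np.roll(g, dy, axis=0), dx, axis=1)
            out |= (np.abs(n - g) <= thresh).astype(np.uint16) << b
        return out

    def init_background_model(frames):
        """B(x) = {B_1(x), ..., B_N(x)}, B_i(x) = {v_i, LBSP_i(x), C_i^1(x), C_i^2(x)}."""
        N = len(frames)
        H, W = frames[0].shape[:2]
        return {
            "color":      np.stack([f.astype(np.float32) for f in frames]),  # v_i, (N, H, W, 3)
            "lbsp":       np.stack([lbsp(f) for f in frames]),               # LBSP_i(x), (N, H, W)
            "conf_color": np.ones((N, H, W), np.float32),  # C_i^1(x), uniform start (assumed)
            "conf_tex":   np.ones((N, H, W), np.float32),  # C_i^2(x)
        }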
Pixel classification: the samples whose distance to the current pixel I_t(x) is smaller than a given distance threshold R(x) are marked as strongly correlated samples, and their number is denoted n.
The colour confidences C_i^1(x) and texture confidences C_i^2(x) of the strongly correlated samples are denoted c_j^1 and c_j^2 (j = 1, ..., n) respectively, i.e.:

{c_j^m | j = 1, ..., n} = {C_i^m(x) : dist_m(I_t(x), B_i(x)) < R(x)}

where m takes the value 1 or 2, corresponding to the colour and texture dimensions; the Euclidean distance is used for the colour-dimension comparison and the Hamming distance for the texture-dimension comparison. The colour confidences and texture confidences of the strongly correlated samples are then summed separately, and the two sums are combined with weights; if the weighted sum is smaller than the minimum threshold min, the pixel is judged to be foreground, otherwise background, i.e.:

lambda_1(x) · Σ_j c_j^1 + lambda_2(x) · Σ_j c_j^2 < min  =>  foreground, otherwise background
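A per-pixel sketch of this confidence-weighted decision is given below; the sample layout follows the initialisation sketch above, and the thresholds R_color, R_tex and t_min as well as the default weights are assumed parameters:

    import numpy as np

    def classify_pixel(pixel, lbsp_val, model, y, x,
                       R_color=30.0, R_tex=4, lam1=0.5, lam2=0.5, t_min=2.0):
        """Weighted fusion of colour and texture confidences for pixel (y, x):
        Euclidean distance for colour, Hamming distance for LBSP texture."""
        c1 = c2 = 0.0
        N = model["color"].shape[0]
        for i in range(N):  # loop over the samples B_i(x)
            if np.linalg.norm(pixel - model["color"][i, y, x]) < R_color:
                c1 += model["conf_color"][i, y, x]        # strongly correlated in colour
            ham = bin(int(lbsp_val) ^ int(model["lbsp"][i, y, x])).count("1")
            if ham < R_tex:
                c2 += model["conf_tex"][i, y, x]          # strongly correlated in texture
        return "foreground" if lam1 * c1 + lam2 * c2 < t_min else "background"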
the updating strategy of the sample confidence coefficient and the confidence coefficient weight comprises the following steps:
(1) for the pixels detected as the background, the sample template with the minimum confidence level in the model is replaced by the current pixel information, so that the new sample is not updated rapidly, the new sample is introduced into the model to adapt to the background change, and 1 is added to the confidence value of the sample. To ensure model stability, all sample template confidence for a pixel is subtracted
Figure BDA0002390096060000052
(2) When t isi(x) When the distance between the current frame pixel and the sample in the model is greater than a given distance threshold, the sample is valid at the time, then the confidence of the valid sample is increased, the confidence of the invalid sample is decreased, and the color dimension and texture dimension confidence updating mode specifically comprises the following steps:
Figure BDA0002390096060000053
wherein, m takes values of 1 and 2, corresponding to the color dimension and the texture dimension, gamma is the number of effective samples, and N is the total number of samples.
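Since the exact update equation appears only as an image in the source, the redistribution below is an assumed concrete form that merely respects the stated behaviour (valid samples gain, invalid samples lose, with amounts governed by gamma and N):

    import numpy as np

    def update_confidences(conf, valid):
        """conf: (N,) confidences of one pixel's samples; valid: (N,) bool mask.
        Assumed increments: valid samples gain (N - gamma)/N, invalid ones lose
        gamma/N, so total confidence is conserved up to clipping."""
        N = conf.shape[0]
        gamma = int(valid.sum())               # number of valid samples
        conf[valid] += (N - gamma) / N         # assumed gain
        conf[~valid] -= gamma / N              # assumed decay
        np.clip(conf, 0.0, None, out=conf)     # keep confidences non-negative
        return conf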
(3) The update strategy for the colour weight lambda_1(x) and the texture weight lambda_2(x) is as follows: when the sum of the colour confidences exceeds the threshold T, the colour weight lambda_1(x) is updated at the larger update level phi_max; otherwise it is updated at the smaller level phi_min. Likewise, when the sum of the texture confidences exceeds the threshold T, the texture weight lambda_2(x) is updated at the larger level phi_max; otherwise it is updated at the smaller level phi_min. (The two update equations are given in the source only as images.)
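One plausible realisation of this two-rate adaptation is exponential smoothing toward the available evidence, sketched below; the target values, default rates and renormalisation are assumptions, since the source shows the update equations only as images:

    def update_weight(lam, conf_sum, T, phi_max=0.05, phi_min=0.01):
        """Move the weight at rate phi_max when its confidence sum exceeds T,
        else at phi_min (assumed exponential-smoothing form)."""
        rate = phi_max if conf_sum > T else phi_min
        target = 1.0 if conf_sum > T else 0.0
        return (1.0 - rate) * lam + rate * target

    # usage sketch, given the per-pixel sums c1_sum, c2_sum and threshold T:
    # lam1 = update_weight(lam1, c1_sum, T)
    # lam2 = update_weight(lam2, c2_sum, T)
    # lam1, lam2 = lam1 / (lam1 + lam2), lam2 / (lam1 + lam2)  # keep lam1 + lam2 = 1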
As shown in Fig. 2, the method of constructing a visual attention mechanism to detect the stationary foreground according to the invention specifically comprises the following steps:
(4-1) After the model is initialized, the video sequence is divided into M subsequences, denoted V_c, c ∈ {1, 2, ..., M}, where each subsequence V_c contains m frames. Denote the current frame of subsequence V_c by v_c; its previous frame is then u_{c-1}.
(4-2) Take the foreground region R detected in the previous frame u_{c-1} as the region of interest of the current frame v_c. If no region R exists in the current frame, the following operations are skipped; otherwise the colour saliency SV_c(R) of region R and the texture similarity K_c(R) between the background and the current frame within R are computed, and the procedure continues with step (4-3).
(4-3) A visual attention mechanism checks whether the region still attracts attention in the way a foreground object does. Based on the colour saliency SV(R) and the texture similarity K(R) between region and background, whether the region is background is judged by:

P_c(R) ∝ K_c(R) / SV_c(R),    P_B(R) ∝ 1 / SV_B(R)

where P_c(R) is the likelihood that region R in the current frame is background; it is inversely proportional to the colour saliency SV_c(R) of R in the current frame and proportional to the texture similarity K_c(R) between R in the current frame and the background. P_B(R) is the threshold for R being background and is inversely proportional to the saliency SV_B(R) of R in the background.
(4-4) If P_B(R) > P_c(R), the current region R is a static foreground; otherwise region R is considered background.
(4-5) If the region of interest R of the current frame is judged to be a static foreground target and the current frame is not the last frame of the current subsequence, the procedure moves to the next frame and repeats from step (4-2); otherwise the static foreground detection for the current subsequence V_c ends.
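The loop (4-1)-(4-5) can be sketched as follows for one subsequence; color_saliency and texture_similarity_region are the helpers sketched further below, and taking the proportionality constants in P_c and P_B as 1 is an assumption:

    def process_subsequence(sub_frames, roi_mask, background_lbsp, sv_b):
        """Track region R (roi_mask, from the frame preceding the subsequence)
        as a static-foreground candidate until it fails the attention test or
        the subsequence ends; returns the per-frame static-foreground masks."""
        static_masks = []
        for frame in sub_frames:
            if roi_mask is None or not roi_mask.any():
                break                                    # no candidate region R
            sv_c = color_saliency(frame, roi_mask)       # SV_c(R)
            k_c = texture_similarity_region(lbsp(frame), background_lbsp, roi_mask)  # K_c(R)
            p_c = k_c / max(sv_c, 1e-6)                  # P_c(R): R looks like background
            p_b = 1.0 / max(sv_b, 1e-6)                  # P_B(R): threshold from SV_B(R)
            if p_b > p_c:
                static_masks.append(roi_mask)            # R remains a static foreground
            else:
                break                                    # R judged background; stop
        return static_masks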
The colour saliency SV_c(R) is calculated as follows: let R be the foreground region of the object of interest, let l be the set of pixels on the edge of the foreground region, and let S be a rectangular region centred on R whose height and width are twice those of R. N_l denotes the 7 x 7 pixel block centred on an edge pixel l. The centre-surround histogram difference of regions R and S is denoted d_r(R, S). The saliency d_l(R, S) of the edge pixel set l of region R is given by

d_l(R, S) = | m_R(l) - m_S(l) |

where m_R(l) is the mean of the pixels in which the block centred on l intersects region R, and m_S(l) is the mean of the pixels in which the block intersects region S. Finally, the colour saliency is defined by combining the two centre-surround difference values: SV_c(R) = d_r(R, S) × d_l(R, S).
The LBSP value of each pixel of region R is computed in both the background model and the current frame; the histogram features are then counted, generating LBSP feature vectors that describe the image texture. Let X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n) denote the image-texture LBSP feature vectors of the model and of the current-frame region R, respectively. The similarity is measured by the cosine of the angle between the two feature vectors; the texture similarity between the background and the current-frame region R is denoted K(R), i.e.:

K(R) = Σ_i x_i y_i / ( sqrt(Σ_i x_i^2) · sqrt(Σ_i y_i^2) )
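The cosine measure above translates directly into code; building the LBSP-code histograms over the region (here with the 8-bit toy codes from the initialisation sketch) is the only assumed detail:

    import numpy as np

    def texture_similarity_region(frame_lbsp, background_lbsp, R_mask):
        """K(R): cosine similarity of the LBSP histograms X (model) and Y
        (current frame) gathered over region R."""
        n = 1 << 8  # 8-bit toy LBSP codes from the sketch above
        x = np.bincount(background_lbsp[R_mask].astype(np.int64), minlength=n).astype(float)
        y = np.bincount(frame_lbsp[R_mask].astype(np.int64), minlength=n).astype(float)
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))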
and finally, refusing to use the pixel points of the detected dynamic foreground and static foreground target regions to update the sample in the background model, and using the region pixel information detected as the background to guide the update of the sample of the background model.
The detailed description above is only a specific account of a feasible embodiment of the invention; it is not intended to limit the scope of protection, and equivalent embodiments or modifications that do not depart from the technical spirit of the invention shall all be included within the scope of protection.

Claims (6)

1. A foreground extraction algorithm based on confidence weighted fusion and visual attention is characterized by comprising the following steps:
(1) initializing a background model from the first N frames;
(2) detecting moving foreground targets by adaptively weighted fusion of colour and texture confidences;
(3) updating the confidences and weights of the colour and texture dimensions of the samples;
(4) constructing a visual attention mechanism to detect short-term static foregrounds, and correcting and fusing the foreground detection results;
(5) guiding the update of the background model according to the foreground detection results.
2. The foreground extraction algorithm based on confidence-weighted fusion and visual attention according to claim 1, wherein in step (1) a background model B(x) is established from the pixel information of the first N frames; the model consists of N samples and has the following structure:

B(x) = {B_1(x), B_2(x), ..., B_i(x), ..., B_N(x)}

wherein each sample B_i(x) consists of a colour value v_i, an LBSP texture feature value LBSP_i(x), a colour-dimension confidence C_i^1(x) and a texture-dimension confidence C_i^2(x), i.e.:

B_i(x) = {v_i, LBSP_i(x), C_i^1(x), C_i^2(x)}
3. The foreground extraction algorithm based on confidence-weighted fusion and visual attention according to claim 1, wherein in step (2) the detection by adaptively weighted fusion of colour and texture confidences specifically comprises the following steps:
(2-1) The samples whose distance to the current pixel I_t(x) is smaller than a given distance threshold R(x) are marked as strongly correlated samples, and their number is denoted n; the colour confidences C_i^1(x) and texture confidences C_i^2(x) of the strongly correlated samples are denoted c_j^1 and c_j^2 (j = 1, ..., n) respectively, i.e.:

{c_j^m | j = 1, ..., n} = {C_i^m(x) : dist_m(I_t(x), B_i(x)) < R(x)}

wherein m takes the value 1 or 2, corresponding to the colour and texture dimensions; the Euclidean distance is used for the colour-dimension comparison and the Hamming distance for the texture-dimension comparison.
(2-2) The colour confidences and texture confidences of the strongly correlated samples are summed separately, and the two sums are combined with weights; if the weighted sum is smaller than the minimum threshold min, the pixel is judged to be foreground, otherwise background, i.e.:

lambda_1(x) · Σ_j c_j^1 + lambda_2(x) · Σ_j c_j^2 < min  =>  foreground, otherwise background
4. The foreground extraction algorithm based on confidence-weighted fusion and visual attention according to claim 1, wherein the update strategy of the sample confidences and confidence weights in step (3) comprises:
(3-1) For a pixel detected as background, the sample template with the minimum confidence in the model is replaced by the current pixel information; the new sample is thus introduced into the model to adapt to background changes, and its confidence value is increased by 1 so that it is not itself rapidly replaced. To ensure the stability of the model, the confidences of all of the pixel's sample templates are then decremented by a small fixed decay amount.
(3-2) When, at time t_i(x), the distance between the current-frame pixel and a sample in the model falls within the given distance threshold, that sample is valid at that time; the confidence of valid samples is then increased and that of invalid samples decreased. The colour- and texture-dimension confidence update depends on m, gamma and N, wherein m takes the values 1 and 2, corresponding to the colour and texture dimensions, gamma is the number of valid samples and N is the total number of samples.
(3-3) The update strategy for the colour weight lambda_1(x) and the texture weight lambda_2(x) is as follows: when the sum of the colour confidences exceeds the threshold T, the colour weight lambda_1(x) is updated at the larger update level phi_max, otherwise at the smaller level phi_min; when the sum of the texture confidences exceeds the threshold T, the texture weight lambda_2(x) is updated at the larger level phi_max, otherwise at the smaller level phi_min.
5. The foreground extraction algorithm based on confidence-weighted fusion and visual attention according to claim 1, wherein the step (4) of constructing a visual attention mechanism to detect the stationary foreground comprises the following steps:
(4-1) After the model is initialized, the video sequence is divided into M subsequences, denoted V_c, c ∈ {1, 2, ..., M}, wherein each subsequence V_c contains m frames. Denote the current frame of subsequence V_c by v_c; its previous frame is then v_{c-1}.
(4-2) Take the foreground region R detected in the previous frame v_{c-1} as the region of interest of the current frame v_c. If no region R exists in the current frame, the following operations are skipped; otherwise the colour saliency SV_c(R) of region R and the texture similarity K_c(R) between the background and the current frame within R are computed, and the procedure continues with step (4-3).
(4-3) Based on the colour saliency SV(R) and the texture similarity K(R) between region and background, the visual attention mechanism judges whether the region is background, i.e.:

P_c(R) ∝ K_c(R) / SV_c(R),    P_B(R) ∝ 1 / SV_B(R)

wherein P_c(R) is the likelihood that region R in the current frame is background; it is inversely proportional to the colour saliency SV_c(R) of R in the current frame and proportional to the texture similarity K_c(R) between R in the current frame and the background; P_B(R) is the threshold for R being background and is inversely proportional to the saliency SV_B(R) of R in the background.
(4-4) If P_B(R) > P_c(R), the current region R is a static foreground; otherwise region R is considered background.
(4-5) If the region of interest R of the current frame is judged to be a static foreground target and the current frame is not the last frame of the current subsequence, the procedure moves to the next frame and repeats from step (4-2); otherwise the static foreground detection for the current subsequence V_c ends.
6. The foreground extraction algorithm based on confidence-weighted fusion and visual attention according to claim 1, wherein step (5) specifically comprises:
(5-1) the pixel regions detected as static foreground override the regions detected as background in step (2);
(5-2) the pixels of the dynamic and static foreground target regions detected in steps (2) and (4) are refused for updating the samples in the background model;
(5-3) the pixel information of the regions detected as background is used to guide the update of the background model samples.
CN202010111277.7A 2020-02-24 2020-02-24 Foreground extraction algorithm based on confidence weighted fusion and visual attention Pending CN111784723A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010111277.7A CN111784723A (en) 2020-02-24 2020-02-24 Foreground extraction algorithm based on confidence weighted fusion and visual attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010111277.7A CN111784723A (en) 2020-02-24 2020-02-24 Foreground extraction algorithm based on confidence weighted fusion and visual attention

Publications (1)

Publication Number Publication Date
CN111784723A true CN111784723A (en) 2020-10-16

Family

ID=72753080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010111277.7A Pending CN111784723A (en) 2020-02-24 2020-02-24 Foreground extraction algorithm based on confidence weighted fusion and visual attention

Country Status (1)

Country Link
CN (1) CN111784723A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409350A (en) * 2021-06-29 2021-09-17 广东工业大学 Method and related device for separating foreground and background of video
CN114567794A (en) * 2022-03-11 2022-05-31 浙江理工大学 Live video background replacement method
CN117428199A (en) * 2023-12-20 2024-01-23 兰州理工合金粉末有限责任公司 Alloy powder atomizing device and atomizing method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012041418A1 (en) * 2010-10-01 2012-04-05 Telefonica, S.A. Method and system for real-time images foreground segmentation
CN106570885A (en) * 2016-11-10 2017-04-19 河海大学 Background modeling method based on brightness and texture fusion threshold value
CN107944499A (en) * 2017-12-10 2018-04-20 上海童慧科技股份有限公司 A kind of background detection method modeled at the same time for prospect background
US20190172212A1 (en) * 2017-12-06 2019-06-06 Blueprint Reality Inc. Multi-modal data fusion for scene segmentation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012041418A1 (en) * 2010-10-01 2012-04-05 Telefonica, S.A. Method and system for real-time images foreground segmentation
CN106570885A (en) * 2016-11-10 2017-04-19 河海大学 Background modeling method based on brightness and texture fusion threshold value
US20190172212A1 (en) * 2017-12-06 2019-06-06 Blueprint Reality Inc. Multi-modal data fusion for scene segmentation
CN107944499A (en) * 2017-12-10 2018-04-20 上海童慧科技股份有限公司 A kind of background detection method modeled at the same time for prospect background

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BYEONGWOO KIM et al.: "Background Modeling Through Spatiotemporal Edge Feature and Color", Lecture Notes in Computer Science, vol. 11845 *
Wan Jian; Hong Mingjian; Zhao Chenqiu: "Background modeling with adaptive neighbourhood correlation", Journal of Image and Graphics (中国图象图形学报), no. 09 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409350A (en) * 2021-06-29 2021-09-17 广东工业大学 Method and related device for separating foreground and background of video
CN113409350B (en) * 2021-06-29 2022-05-31 广东工业大学 Method and related device for separating foreground and background of video
CN114567794A (en) * 2022-03-11 2022-05-31 浙江理工大学 Live video background replacement method
CN117428199A (en) * 2023-12-20 2024-01-23 兰州理工合金粉末有限责任公司 Alloy powder atomizing device and atomizing method
CN117428199B (en) * 2023-12-20 2024-03-26 兰州理工合金粉末有限责任公司 Alloy powder atomizing device and atomizing method

Similar Documents

Publication Publication Date Title
US7668338B2 (en) Person tracking method and apparatus using robot
CN111784723A (en) Foreground extraction algorithm based on confidence weighted fusion and visual attention
CN111368683B (en) Face image feature extraction method and face recognition method based on modular constraint CenterFace
CN110472081B (en) Shoe picture cross-domain retrieval method based on metric learning
CN110866455B (en) Pavement water body detection method
Shen et al. Adaptive pedestrian tracking via patch-based features and spatial–temporal similarity measurement
CN109903339B (en) Video group figure positioning detection method based on multi-dimensional fusion features
CN111832405A (en) Face recognition method based on HOG and depth residual error network
CN112528939A (en) Quality evaluation method and device for face image
CN112329784A (en) Correlation filtering tracking method based on space-time perception and multimodal response
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN105321188A (en) Foreground probability based target tracking method
CN113240829B (en) Intelligent gate passing detection method based on machine vision
CN111881775B (en) Real-time face recognition method and device
CN111626107B (en) Humanoid contour analysis and extraction method oriented to smart home scene
CN113076876A (en) Face spoofing detection method based on three-dimensional structure supervision and confidence weighting
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
US20230386023A1 (en) Method for detecting medical images, electronic device, and storage medium
CN110503061B (en) Multi-feature-fused multi-factor video occlusion area detection method and system
Xia et al. Moving vehicle detection with shadow elimination based on improved ViBe algorithm
Al-Bayati et al. Automatic thresholding techniques for optical images
Xia et al. Moving foreground detection based on spatio-temporal saliency
CN111191575A (en) Naked flame detection method and system based on flame jumping modeling
Jeong et al. Moving shadow detection using a combined geometric and color classification approach
CN111639641B (en) Method and device for acquiring clothing region not worn on human body

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
WD01  Invention patent application deemed withdrawn after publication (application publication date: 20201016)