CN112365519A - Foreground image extraction method - Google Patents

Foreground image extraction method

Info

Publication number
CN112365519A
Authority
CN
China
Prior art keywords
foreground image
texture
background model
calculating
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011342217.2A
Other languages
Chinese (zh)
Inventor
林凡
张秋镇
陈健民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GCI Science and Technology Co Ltd
Original Assignee
GCI Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GCI Science and Technology Co Ltd filed Critical GCI Science and Technology Co Ltd
Priority to CN202011342217.2A priority Critical patent/CN112365519A/en
Publication of CN112365519A publication Critical patent/CN112365519A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a foreground image extraction method, which comprises the following steps: processing an original vehicle background model with a symmetric background elimination method to obtain a first vehicle background model from which the information-free background has been extracted; performing image segmentation on the first vehicle background model to obtain a foreground image to be processed; and performing shadow elimination on the foreground image to be processed based on the texture differences within it. By extracting the information-free background, segmenting the image, and eliminating shadows, the foreground image extraction method provided by the embodiment of the invention improves the accuracy of foreground image extraction, so that vehicles can be identified stably and accurately.

Description

Foreground image extraction method
Technical Field
The invention relates to the technical field of image processing, in particular to a foreground image extraction method.
Background
Image processing technology is now widely applied in fields such as vehicle safety management and road traffic control. Through high-definition video acquisition and effective detection and recognition of vehicle images, it provides visual reference information for drivers and for vehicle management and dispatching centers, enabling optimal management of vehicles. Background and foreground are two relative concepts in image processing, and extracting the foreground image of a moving vehicle by processing a background model is important for post-processing tasks such as object tracking, target classification, and behavior understanding.
However, existing vehicle foreground image extraction methods do not account for interference such as noise and shadows in the image, so the extraction accuracy of the foreground image is low.
Disclosure of Invention
The invention provides a foreground image extraction method that aims to solve the technical problem of the low extraction accuracy of existing foreground image methods, and improves the extraction accuracy of the foreground image by processing the background model.
In order to solve the above technical problem, an embodiment of the present invention provides a foreground image extraction method, including:
processing an original vehicle background model based on a symmetrical background elimination method to obtain a first vehicle background model after extracting an information-free background;
performing image segmentation processing on the first vehicle background model to obtain a foreground image to be processed;
and carrying out shadow elimination processing on the foreground image to be processed based on the texture difference in the foreground image to be processed.
As one preferred scheme, the step of processing the original vehicle background model based on the symmetric background elimination method to obtain the first vehicle background model after extracting the background without information specifically comprises:
calculating each pixel point of the original vehicle background model based on an interframe space comparison method to obtain a moving area and a static area of the original vehicle background model;
calculating a threshold value corresponding to each pixel point in the motion area;
and updating the original vehicle background model by the threshold value to obtain the first vehicle background model after the non-information background is extracted.
As one of the preferable schemes, the step of calculating each pixel point of the original vehicle background model based on the interframe space comparison method specifically includes:
calculating according to the following color distance formula between pixels:
D_k = |R_{(k-1)d} - R_{kd}| + |G_{(k-1)d} - G_{kd}| + |B_{(k-1)d} - B_{kd}|
where R, G, B are the red, green, and blue color components, k is the frame index, and d is the inter-frame interval.
As one of the preferable schemes, the step of calculating the threshold corresponding to each pixel point in the motion region specifically includes:
calculating the corresponding threshold value of each pixel point of the motion area according to the following formula:
[Equation image: adaptive threshold T_i calculated from the preset threshold T_0, the pixel proportion r_i, and the constant C]
where T_i is the threshold, C is a constant, and r_i is the proportion of the pixel points in the whole image.
As one preferred scheme, the step of performing image segmentation processing on the first vehicle background model to obtain a foreground image to be processed specifically includes:
constructing a graph cut energy equation corresponding to the first vehicle background model and generating neighborhood pixels;
calculating the link weights of the neighborhood pixels;
and carrying out segmentation processing on the first vehicle background model according to the link weight so as to obtain a foreground image to be processed.
As one preferred scheme, the graph cut energy equation specifically includes:
E(f) = Σ_p D_p(f_p) + Σ_{(p,q)} V_{p,q}(f_p, f_q)
where f_p and f_q are the patches in which the pixels are located, D_p(f_p) is the color distance of the patch, V is the vertex set of the base model, and V_{p,q}(f_p, f_q) is the vertex set of the small block in which the pixel point is located.
As one of the preferable schemes, the step of calculating the link weights of the neighborhood pixels specifically includes:
calculating the link weights of the neighborhood pixels by using the following formula:
[Equation image: link weight w between neighborhood pixels p and q, computed from the Euclidean distance dist(p, q) and the weight parameter γ]
where w is the link weight, p and q are pixel points in the image, dist is the Euclidean distance between the two pixel points, and γ is a weight parameter.
As one of the preferable schemes, the step of performing the shadow elimination processing on the foreground image to be processed based on the texture difference in the foreground image to be processed specifically includes:
acquiring a texture vector corresponding to each pixel point in the foreground image to be processed;
calculating the texture distance between two adjacent texture vectors;
and according to the texture distance, carrying out shadow removing processing on the foreground image to be processed.
As one preferred scheme, the step of obtaining a texture vector corresponding to each pixel point in the foreground image to be processed specifically includes:
and acquiring a corresponding texture vector according to a texture vector value calculation formula:
[Equation image: texture vector value T_{x,y} computed over the 3 × 3 neighborhood of pixel (x, y)]
where the 3 × 3 neighborhood points of p(x, y) are denoted p(x + m, y + n) with m, n ∈ {0, 1, 2}, and the texture information vector of pixel (x, y) is T_{x,y}.
As one of the preferable schemes, the step of calculating the texture distance between two adjacent texture vectors specifically includes:
calculating the texture distance according to a texture distance calculation formula:
Differ(T_{x1,y1}, T_{x2,y2}) = Σ_{m,n ∈ {0,1,2}} (T_{x1,y1}(m, n) ⊕ T_{x2,y2}(m, n))
where ⊕ is the XOR operation, T_{x1,y1} and T_{x2,y2} are the corresponding texture information vectors, and m, n ∈ {0, 1, 2}.
Compared with the prior art, the method first extracts the information-free background from the original vehicle background model, then obtains the foreground image through image segmentation, and finally extracts an accurate foreground image by eliminating the shadows in it. The method as a whole is suited to image processing in complex and changeable scenes, such as uneven vehicle lighting. It constructs an information-free background within the vehicle background model, effectively reduces the related noise interference in the background model, and eliminates shadows in the image, thereby improving the accuracy and robustness of foreground image extraction so that vehicles can be detected quickly and identified stably and accurately.
Drawings
FIG. 1 is a schematic flow chart of a foreground image extraction method in one embodiment of the present invention;
FIG. 2 is a schematic diagram of original image construction in one embodiment of the present invention;
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present application, the terms "first", "second", "third", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first," "second," "third," etc. may explicitly or implicitly include one or more of the features. In the description of the present application, "a plurality" means two or more unless otherwise specified.
In the description of the present application, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, for example, as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; as a direct connection or an indirect connection through an intervening medium; or as internal communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
In the description of the present application, it is to be noted that, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention, as those skilled in the art will recognize the specific meaning of the terms used in the present application in a particular context.
An embodiment of the present invention provides a foreground image extraction method, and specifically, please refer to fig. 1, where fig. 1 is a schematic flow diagram of a foreground image extraction method in an embodiment of the present invention, which includes:
s1, processing the original vehicle background model based on a symmetrical background elimination method to obtain a first vehicle background model after extracting the background without information;
s2, carrying out image segmentation processing on the first vehicle background model to obtain a foreground image to be processed;
s3, based on the texture difference in the foreground image to be processed, shadow elimination processing is carried out on the foreground image to be processed.
It should be noted that, in the background modeling stage, the mean of the frames in the surveillance video is calculated, and a more accurate background model is obtained from this mean. A mean-based background elimination method is adopted, comprising two stages: background modeling and foreground identification.
The background elimination operation is performed between the background model and the image frame to be examined, and the resulting difference image is binarized with a threshold to obtain the background-elimination result.
Let the pixel value at position (x, y) in the background model be B_{x,y}, and let the pixel value at (x, y) in the i-th frame be
[Equation image: symbol for the pixel value of the i-th frame at (x, y)]
The pixel mean can then be derived from the calculation formula
[Equation image: pixel mean at (x, y) computed by averaging the per-frame pixel values over all frames]
The pixel values used in the above formula may be obtained in different ways, for example as multi-channel (R, G, B) vectors or as single-channel gray values. There are many algorithms for calculating the mean; a sliding-average method and an online-average method are commonly used. Since the video sequence is fed in one frame at a time, the advantages and disadvantages of the different algorithms are weighed when making the choice. The invention adopts an online average algorithm with the formula
[Equation image: online (iterative) mean update of the background model]
The above formula shows that the background model can be updated iteratively and continuously within the system, so the stopping condition of the background training can be chosen flexibly. The principle of the conventional mean background model is relatively simple and its advantages are obvious: it is very fast, generally requiring only one addition per update. But because the model is so simple, its capacity to handle complex scenes is insufficient and its representation is not accurate enough. When there is a large difference between background and foreground, the background mean calculated by the algorithm can deviate considerably from the true value. To obtain a relatively stable background model, a very large number of samples must participate in the computation; the more samples, the more accurate the result. To improve the accuracy of the mean background model, it is corrected iteratively. In the first iteration, the same mean model as the ordinary mean calculation is used to obtain a result; after the mean is obtained, samples with large deviations are screened out and filtered, and the mean is then computed a second time. Repeating these steps, the result gradually converges to a mean that is more accurate and better matches the real background model.
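To make the mean-based background modeling above concrete, the following Python sketch shows a running (online) mean update of the background model together with the background elimination and binarization step described earlier. It is a minimal sketch only: the function names, the per-frame update rule B_i = B_{i-1} + (I_i - B_{i-1}) / i, and the channel handling are illustrative assumptions, since the patent's own formulas appear only as equation images.

```python
import numpy as np

def update_background_online(background, frame, frame_index):
    """Fold the frame_index-th frame (1-based) into a running-mean background.

    Minimal sketch of an online mean update; the exact rule used by the
    patent is available only as an equation image.
    """
    background = background.astype(np.float64)
    frame = frame.astype(np.float64)
    return background + (frame - background) / float(frame_index)

def eliminate_background(background, frame, threshold):
    """Subtract the background model from a frame and binarize with a threshold."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    if diff.ndim == 3:           # multi-channel (R, G, B) input
        diff = diff.sum(axis=2)  # combine the per-channel differences
    return (diff > threshold).astype(np.uint8)  # 1 = foreground, 0 = background
```

Iterating update_background_online over the training frames and then filtering out samples that deviate strongly from the resulting mean, as described above, would yield the corrected mean background model.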
Preferably, in one embodiment, the step of processing the original vehicle background model based on the symmetric background elimination method to obtain the first vehicle background model after extracting the background without information includes:
s11, calculating each pixel point of the original vehicle background model based on an interframe space comparison method to obtain a moving area and a static area of the original vehicle background model;
s12, calculating a threshold value corresponding to each pixel point in the motion area;
and S13, updating the original vehicle background model by the threshold value to obtain the first vehicle background model after the non-information background is extracted.
It should be noted that automatic learning cannot proceed effectively when no information-free background frame is available for learning, and in practical applications the first frame used by the symmetric background elimination method is often such a case. Therefore, to improve the algorithm's ability to learn and construct the background automatically even without an information-free background frame, the specific characteristics of the information-free background are added in advance to the background elimination method based on the self-organizing neural network, replacing the original algorithm's information-free background modeling in the learning stage. The essence of reconstructing an information-free background is to extract the static regions of the video sequence separately and update those regions into the background model. Extracting moving and static regions with the symmetric background elimination method is a relative operation, and the information-free background can be constructed quickly: once the moving regions are detected successfully, the remaining regions are static. When establishing the information-free background, the timeliness of the algorithm is fully taken into account while allowing some flexibility in accuracy, which is why the symmetric background elimination method based on inter-frame comparison is selected.
Preferably, in the above embodiment, the motion area in the image is calculated using the symmetric algorithm, and the inter-frame interval d used for symmetric background elimination is chosen according to the scene. If the motion in the scene is not severe, the value can be set somewhat larger, for example 4 or 8. For traffic-lane vehicle detection and similar cases, however, only a small inter-frame interval such as 1 or 2 should be used. The color distance between pixels is given by the following formula, the purpose being to optimize the algorithm so that it can operate directly in R, G, B space.
D_k = |R_{(k-1)d} - R_{kd}| + |G_{(k-1)d} - G_{kd}| + |B_{(k-1)d} - B_{kd}|
In the above equation, R, G, and B represent the red, green, and blue color components, respectively, and the subscripts denote the corresponding frame indices of the video sequence.
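A minimal sketch of the per-pixel color distance D_k above, assuming the two frames are given as RGB arrays of identical shape; the function name and array interface are illustrative, not taken from the patent.

```python
import numpy as np

def color_distance(frame_prev, frame_curr):
    """Per-pixel D_k = |R_(k-1)d - R_kd| + |G_(k-1)d - G_kd| + |B_(k-1)d - B_kd|.

    frame_prev and frame_curr are frames (k-1)*d and k*d of the sequence,
    where d is the inter-frame interval chosen for the scene (e.g. 1 or 2 for
    traffic lanes, 4 or 8 for scenes with little motion).
    """
    prev = frame_prev.astype(np.int32)
    curr = frame_curr.astype(np.int32)
    return np.abs(prev - curr).sum(axis=2)  # sum of absolute R, G, B differences
```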
Preferably, in the above embodiment, when the background is eliminated symmetrically, a binarization judgment threshold is set to adapt to different scenes, and the algorithm obtains this threshold adaptively. A relatively small threshold T_0 is preset, the proportion of the pixel points in the whole image is denoted r_i, and a new threshold T_i is set, from which the following can be calculated:
[Equation image: adaptive binarization threshold T_i computed from T_0, r_i, and the constant C]
In the above formula, C is a constant whose conventional value is 10. After this calculation, when the foreground occupies more than 70% of the whole image, the binarization threshold is increased appropriately.
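The adaptive-threshold formula itself is present only as an equation image, so the sketch below implements just the behavior stated in the text: C is a constant with conventional value 10, and the threshold is raised when the foreground exceeds 70% of the image. The step size and the function interface are assumptions.

```python
def adapt_threshold(diff_map, threshold, C=10, max_foreground_ratio=0.7):
    """Raise the binarization threshold when too much of the image is foreground.

    diff_map is the per-pixel color-distance map (a NumPy array); threshold is
    the current binarization threshold. Raising the threshold by C when the
    foreground ratio exceeds 70% is an illustrative reading of the text above.
    """
    foreground_ratio = float((diff_map > threshold).mean())
    if foreground_ratio > max_foreground_ratio:
        threshold += C
    return threshold
```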
Preferably, the result obtained after symmetric background elimination is divided into M × N small blocks, following a block-division principle that each block must contain at least 300 pixel points, which ensures both the accuracy and the efficiency of the algorithm.
When every block f(i, j) has been updated successfully, the information-free background has been established successfully. If too few learning frames are available and some pixel points are not updated successfully, a learning-frame-number threshold K must be set; if this value is exceeded and not every block has been updated, the learning stage is ended forcibly, leaving a certain number of empty blocks that must then be filled in using an averaging method. Let the proportion of the moving pixel points (m, n) of a block, relative to the whole block, in the information-free background model be β(m, n), from which we obtain
[Equation image: fill-in value for empty blocks computed from the moving-pixel proportion β(m, n)]
As a preferred embodiment, the step of performing image segmentation processing on the first vehicle background model to obtain a foreground image to be processed specifically includes:
s21, constructing a graph cut energy equation corresponding to the first vehicle background model, and generating neighborhood pixels;
s22, calculating the link weights of the neighborhood pixels;
and S23, performing segmentation processing on the first vehicle background model according to the link weight to obtain a foreground image to be processed.
It should be noted that most conventional background elimination methods perform foreground detection on blocks or pixels using temporal and spatial differences, but most such background models have many defects: they focus only on constructing the background model and try to summarize all states of the background with the constructed model. The scene, however, is complicated and variable and is difficult to cover completely. In a motion scene in particular the complexity increases sharply, which requires the background model to adapt to almost every possible change in the scene; spatial information cannot be ignored, cast shadows in the moving foreground can greatly affect the detection result, and camera shake can defeat the traditional background algorithm. To detect moving targets in a complex environment, and especially to identify vehicles, the first task is to separate the foreground from the background completely, and a graph cut algorithm is used for image segmentation and processing. The graph cut algorithm is a static image segmentation algorithm: a static image carries no temporal information, and its various basic pieces of information can be separated according to a criterion. Specifically, referring to fig. 2, fig. 2 is a schematic diagram of the original image construction in an embodiment of the present invention, in which the points p and q represent pixel points in the image, S and T are virtual special (terminal) points, w is a weight, and the basic model is
G=(V,E)
In the above formula, V and E represent the vertex set and the edge set of fig. 2, respectively.
As can be seen from FIG. 2, the points p and q represent pixels in the image, where {p, q} ∈ E; S and T are virtual special points; and the set of pixels is P, so that
V = P ∪ {S, T}
In addition, E also contains special edges connecting the terminals to the pixels, with {S, p} ∈ E and {q, T} ∈ E, so that
E = (⋃_{{p,q}∈C} {p, q}) ∪ (⋃_{p∈P} {S, p}) ∪ (⋃_{q∈P} {q, T})
In the above formula, C represents the neighborhood system, and the energy equation constructed for the graph cut is
E(f) = Σ_p D_p(f_p) + Σ_{(p,q)} V_{p,q}(f_p, f_q)
where f_p and f_q are the patches in which the pixels are located, D_p(f_p) is the color distance of the patch, V is the vertex set of the base model, and V_{p,q}(f_p, f_q) is the vertex set of the small block in which the pixel point is located.
With this algorithm the background elimination method can be integrated relatively smoothly: a rough foreground result is first obtained through the traditional background elimination process, the image is then processed with the graph cut algorithm, and the image is segmented accurately. Finally, morphological erosion is applied to the segmented image, which removes much of the isolated noise without affecting the foreground region.
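As an illustration of the final noise-removal step, the following sketch applies morphological erosion to the binary foreground mask using OpenCV; the kernel size and iteration count are assumptions, not values from the patent.

```python
import cv2
import numpy as np

def remove_isolated_noise(foreground_mask, kernel_size=3):
    """Erode a binary foreground mask to remove isolated noise pixels."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.erode(foreground_mask, kernel, iterations=1)
```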
Let the probability that pixel point q takes the label l_q be P(l_q, q); the weight between a terminal point and a pixel in the image is then calculated as
W_{S,q} = -log P(l_q, q)
Further, let the Euclidean distance between pixels be dist(·) and the weight parameter be γ; the link weight between neighborhood pixels can then be calculated as
[Equation image: link weight w between neighborhood pixels p and q, computed from dist(p, q) and the weight parameter γ]
where w is the link weight, p and q are pixel points in the image, dist is the Euclidean distance between the two pixel points, and γ is a weight parameter.
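The terminal weight W_{S,q} = -log P(l_q, q) is given above, while the neighborhood link-weight formula exists only as an equation image. The sketch below therefore pairs the stated terminal weight with a GrabCut-style n-link weight that uses the Euclidean distance dist(p, q) and a weight parameter γ, as the text describes; the exact functional form and the default γ and β values are assumptions.

```python
import numpy as np

def terminal_weight(prob_label_q):
    """t-link weight W_{S,q} = -log P(l_q, q) between a terminal and pixel q."""
    return -np.log(max(prob_label_q, 1e-12))  # clamp to avoid log(0)

def neighbor_link_weight(color_p, color_q, pos_p, pos_q, gamma=50.0, beta=0.01):
    """n-link weight between neighborhood pixels p and q (GrabCut-style form).

    Uses the Euclidean distance dist(p, q) between the pixel positions and a
    weight parameter gamma, as described above; the exponential color term and
    the default parameter values are illustrative assumptions.
    """
    color_diff = np.asarray(color_p, dtype=np.float64) - np.asarray(color_q, dtype=np.float64)
    dist = np.linalg.norm(np.asarray(pos_p, dtype=np.float64) - np.asarray(pos_q, dtype=np.float64))
    return (gamma / dist) * np.exp(-beta * float(color_diff @ color_diff))
```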
Conventional background elimination methods generally require the camera to be held static so that image jitter does not harm the final detection result. After the foreground region is detected by the symmetric background elimination method, classification training with a support vector machine can be carried out, so that continuously performing foreground detection and classifier updating can cope with dynamic changes of the detection scene.
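The paragraph above only states that a support vector machine is trained on the detected foreground regions and then updated continuously; the following scikit-learn sketch shows what such a training step could look like. The feature representation, labels, and kernel choice are assumptions.

```python
from sklearn import svm

def train_foreground_classifier(region_features, region_labels):
    """Train an SVM on features of detected foreground regions.

    region_features: one feature vector per detected region (illustrative);
    region_labels: e.g. 1 for vehicle, 0 for non-vehicle. Retraining with
    newly detected regions implements the classifier updating mentioned above.
    """
    classifier = svm.SVC(kernel="rbf")
    classifier.fit(region_features, region_labels)
    return classifier
```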
As a preferred embodiment, the step of performing the shadow elimination processing on the foreground image to be processed based on the texture difference in the foreground image to be processed specifically includes:
s31, acquiring a texture vector corresponding to each pixel point in the foreground image to be processed;
s32, calculating the texture distance between two adjacent texture vectors;
and S33, according to the texture distance, carrying out shadow removing processing on the foreground image to be processed.
It should be noted that shadowed areas are darker than unshadowed areas but show no significant change in color or texture information. There is a clear correlation in shape and behavior between shadows and the objects that cast them, and the presence of a shadow also indicates that the illuminated object is not transparent. The light intensity and light-source position cannot be known in advance for an arbitrary scene, and reflected light or multiple light sources and objects can make the illumination in the scene very complicated.
Each pixel point corresponds one-to-one to a texture vector T_{x,y}. Let the mean of the 3 × 3 neighborhood A_{x,y} be u_{x,y}; its calculation formula is
u_{x,y} = (1/9) Σ_{m,n ∈ {0,1,2}} p(x + m, y + n)
Let the 3 × 3 neighborhood points of p(x, y) be p(x + m, y + n), where m, n ∈ {0, 1, 2}, and let the texture information vector of pixel point (x, y) be T_{x,y}; the texture vector value is then given by
[Equation image: texture vector value T_{x,y} computed over the 3 × 3 neighborhood of pixel (x, y) using the neighborhood mean u_{x,y}]
The texture difference between two pixel points is then judged: when all vector components of the two points are identical, the two textures are completely consistent. However, since the textures may be affected by noise, a certain degree of difference between them is acceptable and the two textures can still be judged to be the same. The texture distance between two pixel points is calculated as
Differ(T_{x1,y1}, T_{x2,y2}) = Σ_{m,n ∈ {0,1,2}} (T_{x1,y1}(m, n) ⊕ T_{x2,y2}(m, n))
In the above formula, ⊕ denotes the XOR operation; the values on the two sides of the operator are compared to judge whether the two texture vectors agree. The number of positions at which the two texture vectors differ is the value of Differ, and Differ ∈ [0, 9].
The distance between the background texture vector and the current texture vector is calculated; if the value is smaller than a preset threshold, the current point is considered consistent with the background texture and is treated as background, thereby improving the accuracy of foreground image extraction through shadow elimination.
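To make the texture-based shadow removal concrete, the sketch below builds a 9-component binary texture vector for each pixel and compares it to the background's vector with the XOR-based Differ distance. Because the texture-vector formula is available only as an equation image, binarizing each neighborhood pixel against the neighborhood mean u_{x,y} is an assumption (consistent with Differ ∈ [0, 9]); the threshold max_differ is likewise illustrative.

```python
import numpy as np

def texture_vector(gray, x, y):
    """9-component binary texture vector of the 3x3 neighborhood at (x, y).

    Each neighborhood pixel p(x+m, y+n), m, n in {0, 1, 2}, is compared with
    the neighborhood mean u_{x,y}; this binarization is an assumed reading of
    the equation image above.
    """
    patch = gray[x:x + 3, y:y + 3].astype(np.float64)
    u = patch.mean()                       # neighborhood mean u_{x,y}
    return (patch >= u).astype(np.uint8).ravel()

def texture_distance(t1, t2):
    """Differ: number of components where the two texture vectors disagree."""
    return int(np.count_nonzero(np.bitwise_xor(t1, t2)))

def is_background_or_shadow(gray_frame, gray_background, x, y, max_differ=2):
    """Treat (x, y) as background (shadow) when its texture matches the background's."""
    differ = texture_distance(texture_vector(gray_frame, x, y),
                              texture_vector(gray_background, x, y))
    return differ < max_differ
```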
According to the foreground image extraction method provided by the embodiment of the invention, the information-free background of the original vehicle background model is extracted first, the foreground image is then obtained through image segmentation, and an accurate foreground image is finally extracted by eliminating its shadows. The method as a whole is suited to image processing in complex and changeable scenes, such as uneven vehicle lighting. It constructs an information-free background within the vehicle background model, effectively reduces the related noise interference in the background model, and eliminates shadows in the image, thereby improving the accuracy and robustness of foreground image extraction so that vehicles can be detected quickly and identified stably and accurately.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A foreground image extraction method is characterized by comprising the following steps:
processing an original vehicle background model based on a symmetrical background elimination method to obtain a first vehicle background model after extracting an information-free background;
performing image segmentation processing on the first vehicle background model to obtain a foreground image to be processed;
and carrying out shadow elimination processing on the foreground image to be processed based on the texture difference in the foreground image to be processed.
2. The foreground image extraction method of claim 1, wherein the step of processing the original vehicle background model based on a symmetric background elimination method to obtain the first vehicle background model after extracting the background without information is specifically:
calculating each pixel point of the original vehicle background model based on an interframe space comparison method to obtain a moving area and a static area of the original vehicle background model;
calculating a threshold value corresponding to each pixel point in the motion area;
and updating the original vehicle background model by the threshold value to obtain the first vehicle background model after the non-information background is extracted.
3. The foreground image extraction method of claim 2, wherein the step of calculating each pixel point of the original vehicle background model based on the inter-frame space comparison method specifically comprises:
calculating according to the following color distance formula between pixels:
D_k = |R_{(k-1)d} - R_{kd}| + |G_{(k-1)d} - G_{kd}| + |B_{(k-1)d} - B_{kd}|
where R, G, B are the red, green, and blue color components, k is the frame index, and d is the inter-frame interval.
4. The foreground image extracting method of claim 2, wherein the step of calculating the threshold value corresponding to each pixel point of the motion area specifically comprises:
calculating the corresponding threshold value of each pixel point of the motion area according to the following formula:
[Equation image: adaptive threshold T_i calculated from the preset threshold T_0, the pixel proportion r_i, and the constant C]
where T_i is the threshold, C is a constant, and r_i is the proportion of the pixel points in the whole image.
5. The foreground image extraction method of claim 1, wherein the step of performing image segmentation processing on the first vehicle background model to obtain a foreground image to be processed specifically includes:
constructing a graph cut energy equation corresponding to the first vehicle background model and generating neighborhood pixels;
calculating the link weights of the neighborhood pixels;
and carrying out segmentation processing on the first vehicle background model according to the link weight so as to obtain a foreground image to be processed.
6. The foreground image extraction method of claim 5, wherein the graph cut energy equation is specifically:
E(f) = Σ_p D_p(f_p) + Σ_{(p,q)} V_{p,q}(f_p, f_q)
where f_p and f_q are the patches in which the pixels are located, D_p(f_p) is the color distance of the patch, V is the vertex set of the base model, and V_{p,q}(f_p, f_q) is the vertex set of the small block in which the pixel point is located.
7. The foreground image extracting method of claim 5, wherein the step of calculating the link weights of the neighborhood pixels comprises:
calculating the link weights of the neighborhood pixels by using the following formula:
[Equation image: link weight w between neighborhood pixels p and q, computed from the Euclidean distance dist(p, q) and the weight parameter γ]
where w is the link weight, p and q are pixel points in the image, dist is the Euclidean distance between the two pixel points, and γ is a weight parameter.
8. The foreground image extracting method of claim 1, wherein the step of performing the shadow elimination processing on the foreground image to be processed based on the texture difference in the foreground image to be processed specifically comprises:
acquiring a texture vector corresponding to each pixel point in the foreground image to be processed;
calculating the texture distance between two adjacent texture vectors;
and according to the texture distance, carrying out shadow removing processing on the foreground image to be processed.
9. The foreground image extraction method of claim 8, wherein the step of obtaining the texture vector corresponding to each pixel point in the foreground image to be processed specifically comprises:
and acquiring a corresponding texture vector according to a texture vector value calculation formula:
[Equation image: texture vector value T_{x,y} computed over the 3 × 3 neighborhood of pixel (x, y)]
where the 3 × 3 neighborhood points of p(x, y) are denoted p(x + m, y + n) with m, n ∈ {0, 1, 2}, and the texture information vector of pixel point (x, y) is T_{x,y}.
10. The foreground image extracting method of claim 8, wherein the step of calculating the texture distance between two adjacent texture vectors specifically comprises:
calculating the texture distance according to a texture distance calculation formula:
Differ(T_{x1,y1}, T_{x2,y2}) = Σ_{m,n ∈ {0,1,2}} (T_{x1,y1}(m, n) ⊕ T_{x2,y2}(m, n))
where ⊕ is the XOR operation, T_{x1,y1} and T_{x2,y2} are the corresponding texture information vectors, and m, n ∈ {0, 1, 2}.
CN202011342217.2A 2020-11-25 2020-11-25 Foreground image extraction method Pending CN112365519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011342217.2A CN112365519A (en) 2020-11-25 2020-11-25 Foreground image extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011342217.2A CN112365519A (en) 2020-11-25 2020-11-25 Foreground image extraction method

Publications (1)

Publication Number Publication Date
CN112365519A true CN112365519A (en) 2021-02-12

Family

ID=74533389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011342217.2A Pending CN112365519A (en) 2020-11-25 2020-11-25 Foreground image extraction method

Country Status (1)

Country Link
CN (1) CN112365519A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220949A (en) * 2017-05-27 2017-09-29 安徽大学 The self adaptive elimination method of moving vehicle shade in highway monitoring video
CN111161307A (en) * 2019-12-19 2020-05-15 深圳云天励飞技术有限公司 Image segmentation method and device, electronic equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220949A (en) * 2017-05-27 2017-09-29 安徽大学 The self adaptive elimination method of moving vehicle shade in highway monitoring video
CN111161307A (en) * 2019-12-19 2020-05-15 深圳云天励飞技术有限公司 Image segmentation method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
高玉潼 et al.: "基于对称差分算法的快速人脸运动图像分割方法" [Fast segmentation method for face motion images based on a symmetric difference algorithm], 《西南大学学报(自然科学版)》 [Journal of Southwest University (Natural Science Edition)], vol. 42, no. 7, pages 184-193 *

Similar Documents

Publication Publication Date Title
CN109859171B (en) Automatic floor defect detection method based on computer vision and deep learning
JP4668921B2 (en) Object detection in images
US11700457B2 (en) Flicker mitigation via image signal processing
CN111062974B (en) Method and system for extracting foreground target by removing ghost
CN110610150B (en) Tracking method, device, computing equipment and medium of target moving object
CN106327488B (en) Self-adaptive foreground detection method and detection device thereof
CN111886600A (en) Device and method for instance level segmentation of image
CN102915544A (en) Video image motion target extracting method based on pattern detection and color segmentation
CN104915940A (en) Alignment-based image denoising method and system
CN111310768B (en) Saliency target detection method based on robustness background prior and global information
CN103119625A (en) Video character separation method and device
CN111815528A (en) Bad weather image classification enhancement method based on convolution model and feature fusion
CN103729828A (en) Video rain removing method
CN114881869A (en) Inspection video image preprocessing method
CN110807738A (en) Fuzzy image non-blind restoration method based on edge image block sharpening
CN114022823A (en) Shielding-driven pedestrian re-identification method and system and storable medium
CN112561946A (en) Dynamic target detection method
CN109766846B (en) Video-based self-adaptive multi-lane traffic flow detection method and system
CN111460964A (en) Moving target detection method under low-illumination condition of radio and television transmission machine room
CN109658441B (en) Foreground detection method and device based on depth information
CN109308709B (en) Vibe moving target detection algorithm based on image segmentation
CN106446832B (en) Video-based pedestrian real-time detection method
Jin et al. Fusing Canny operator with vibe algorithm for target detection
Xie et al. Robust vehicles extraction in a video-based intelligent transportation systems
CN112365519A (en) Foreground image extraction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination