CN112991293A - Fast self-adaptive real-time color background extraction method - Google Patents

Fast self-adaptive real-time color background extraction method Download PDF

Info

Publication number
CN112991293A
CN112991293A CN202110269289.7A CN202110269289A CN112991293A CN 112991293 A CN112991293 A CN 112991293A CN 202110269289 A CN202110269289 A CN 202110269289A CN 112991293 A CN112991293 A CN 112991293A
Authority
CN
China
Prior art keywords
color background
image
initial color
real
current image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110269289.7A
Other languages
Chinese (zh)
Other versions
CN112991293B (en
Inventor
胡伍生
谌越
彭震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202110269289.7A priority Critical patent/CN112991293B/en
Publication of CN112991293A publication Critical patent/CN112991293A/en
Application granted granted Critical
Publication of CN112991293B publication Critical patent/CN112991293B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fast self-adaptive real-time color background extraction method, which comprises the following steps: loading a current image based on a monitoring video acquired in real time, storing the current image into an image library, and updating the number of images in the image library; setting the image sequence number of the current image as the number of images contained in the updated image library; updating an image library for extracting an initial color background; extracting and updating an initial color background based on the updated initial color background image library; calculating a difference value between the current image and the initial color background; and calculating the initial color background weight and the current image weight at each pixel position, and extracting the real-time color background. The invention improves the background precision and the calculation speed, and is beneficial to improving the efficiency and the accuracy of video monitoring scene analysis.

Description

Fast self-adaptive real-time color background extraction method
Technical Field
The invention belongs to the technical field of computer vision and image processing, and particularly relates to a color background extraction method.
Background
Nowadays, with the development of computer vision and image processing technology, intelligent management of video data gradually receives great attention. Understanding and analyzing scenes based on videos is an important research direction, mainly reflects classification and positioning of objects and event detection, and is widely applied in real life, for example, in the aspect of safety control, abnormal targets and behaviors can be recognized; in the aspect of intelligent transportation, detection, classification and the like of moving vehicles can be realized. At present, the scene analysis methods mainly include two types, namely scene analysis based on a deep neural network and scene analysis based on a background. The former needs to collect a plurality of video frames to train a neural network, and then identifies abnormal behaviors or moving targets in a scene by utilizing a trained neural network model, although the precision of the analysis method is higher, the analysis method usually has a better effect only on the scene for training the neural network, and when the video scene is changed, the effect is greatly reduced, so the universality is poorer; meanwhile, the method needs a large amount of calculation and cannot meet the real-time requirement.
The background-based scene analysis method is used for realizing classification, positioning and event detection of objects by extracting the background of a video scene and analyzing the difference between a current video frame and the background. In contrast, background-based scene analysis methods show great advantages in both universality and real-time. The method does not need to collect training data and can be suitable for various scenes; meanwhile, compared with a method based on a deep neural network, the method has the advantages of higher calculation speed and capability of meeting the real-time requirement. The effect of background-based scene analysis tends to depend on the accuracy of the extracted background. However, due to the change of the actual environment, it is difficult to extract a high-precision real-time color background; meanwhile, since the extraction of the background is the basis of the subsequent scene analysis, the operation efficiency needs to be improved, and the upper-layer application is served with more free computing power; more importantly, the existing background extraction results are mostly gray backgrounds, a large amount of data information is lost, and subsequent scene analysis is not facilitated. Based on the method, the method has important practical significance for quickly extracting the real-time and high-precision color background.
Disclosure of Invention
In order to solve the technical problems mentioned in the background art, the invention provides a fast self-adaptive real-time color background extraction method.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
a fast self-adaptive real-time color background extraction method is characterized by comprising the following steps:
(1) loading a current image based on a monitoring video acquired in real time, storing the current image into an image library, and updating the number of images in the image library;
(2) setting the image sequence number of the current image as the number of images contained in the updated image library; if the image sequence number of the current image is an integral multiple of the set initial color background updating frequency, the step (3) is carried out, otherwise, the step (5) is directly carried out;
(3) updating an image library for extracting an initial color background;
(4) extracting and updating an initial color background based on the updated initial color background image library;
(5) calculating a difference value between the current image and the initial color background;
(6) and calculating the initial color background weight and the current image weight at each pixel position, and extracting the real-time color background.
Further, in step (3), with the image number [ K-Rate, K ] as a range, selecting K frames of images at equal intervals in the image library, updating the image library for extracting the initial color background, and the selected image number n satisfies the following formula:
Figure BDA0002973564070000021
where K is the number of images included in the updated image library, K is Rate × N, Rate is the update frequency of the set initial color background, N is a positive integer, K is the number of images in the set initial color background image library, and satisfies K ≦ Rate, [ x ] denotes taking the maximum integer not exceeding x.
Further, in step (4), an image is computed for each pixel location (i, j) in the updated initial color background image library
Figure BDA0002973564070000031
Average of three color channels of
Figure BDA0002973564070000032
Figure BDA0002973564070000033
Wherein the content of the first and second substances,
Figure BDA0002973564070000034
respectively, are images at pixel positions (i, j)
Figure BDA0002973564070000035
N is the image
Figure BDA0002973564070000036
The serial number of (2).
Further, in step (4), K average values are averaged for each pixel position (i, j)
Figure BDA0002973564070000037
Sorting from small to large; when K is an even number, the image number corresponding to the average value at the (K/2) position is recorded, and when K is an odd number, the image number corresponding to the average value at the ((K +1)/2) position is recorded, and the recorded image number is denoted as ORD (i, j).
Further, in step (4), in the updated initial color background image library, for each pixel position (i, j), the pixel value of the image with the image number ORD (i, j) at the pixel position (i, j) is taken as the initial color background value of the pixel position (i, j):
Figure BDA0002973564070000038
wherein the content of the first and second substances,
Figure BDA0002973564070000039
respectively three color channel values of the initial color background at pixel location (i, j),
Figure BDA00029735640700000310
three color channel values at pixel location (i, j) for an image with image number ORD (i, j), respectively.
Further, in step (5), the difference value between the current image and the initial color background is as follows:
Figure BDA00029735640700000311
wherein the content of the first and second substances,
Figure BDA00029735640700000312
is the difference value of the current image and the initial color background;
Figure BDA00029735640700000313
and
Figure BDA00029735640700000314
are respectively the current image
Figure BDA00029735640700000315
And an initial colored background
Figure BDA00029735640700000316
Three-channel average of (1):
Figure BDA0002973564070000041
Figure BDA0002973564070000042
wherein the content of the first and second substances,
Figure BDA0002973564070000043
respectively, the current image at pixel position (i, j)
Figure BDA0002973564070000044
Three color channel values of (a);
Figure BDA0002973564070000045
respectively, an initial colored background at pixel location (i, j)
Figure BDA0002973564070000046
Three color channel values.
Further, in step (6), the initial color background weight and the current image weight are adaptively adjusted as the pixel position (i, j) changes:
Figure BDA0002973564070000047
wherein the content of the first and second substances,
Figure BDA0002973564070000048
and
Figure BDA0002973564070000049
an initial color background weight and a current image weight at pixel location (i, j), respectively,
Figure BDA00029735640700000410
is that
Figure BDA00029735640700000411
The value at pixel location (i, j).
Further, a fast adaptive real-time color background extraction method as claimed in claim 1, wherein in step (6), if the initial color background weight of the pixel position (i, j) is set
Figure BDA00029735640700000412
Satisfies the following conditions:
Figure BDA00029735640700000413
the real-time color background value for pixel location (i, j):
Figure BDA00029735640700000414
otherwise the real-time color background value of pixel location (i, j):
Figure BDA00029735640700000415
wherein the content of the first and second substances,
Figure BDA00029735640700000416
respectively, three color channel values of the real-time color background at pixel location (i, j).
Adopt the beneficial effect that above-mentioned technical scheme brought:
the invention realizes the extraction of the real-time color background, improves the precision and the calculation speed of the background, is beneficial to improving the efficiency and the accuracy of the video monitoring scene analysis, and has very important practical significance.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a current image in the embodiment;
FIG. 3 is a schematic three-channel diagram corresponding to FIG. 2, wherein (a), (b), and (c) correspond to R, G, B, respectively;
FIG. 4 is an initial color background diagram in an embodiment;
FIG. 5 is a schematic three-way diagram corresponding to FIG. 4, wherein (a), (b), and (c) correspond to R, G, B, respectively;
FIG. 6 is a schematic diagram of three-channel average values of a current image and an initial color background in an embodiment, where (a) and (b) correspond to the current image and the initial color background, respectively;
FIG. 7 is a differential value and two-dimensional representation of the current image and the initial color background in the embodiment, wherein (a), (b), and (c) correspond to the differential value graph, the two-dimensional representation of the horizontal position, and the two-dimensional representation of the vertical position, respectively;
FIG. 8 is a real-time color background map of an embodiment;
FIG. 9 is a schematic three-way view of FIG. 8, wherein (a), (b), and (c) correspond to R, G, B, respectively;
FIG. 10 is a differential value and two-dimensional representation of the current image and the real-time color background in the embodiment, wherein (a), (b), and (c) correspond to the differential value graph, the two-dimensional representation of the horizontal position, and the two-dimensional representation of the vertical position, respectively.
Detailed Description
The technical scheme of the invention is explained in detail in the following with the accompanying drawings.
In the embodiment, the video frame Rate is 30 frames/second, the update frequency Rate of the initial color background is set to 900 frames, and the number of the images K in the initial color background image library is set to 30 frames, so that the initial color background is updated only once every 15min, and the speed of the technical scheme can be increased. To go through all steps, the number of images contained in the current image library is set to 2699 frames. As shown in fig. 1, the steps of this embodiment are as follows:
s1: and loading the current image based on the monitoring video acquired in real time and storing the current image in an image library. The number of images in the image library is updated. The method comprises the following specific steps:
the number of images contained in the current image library is 2699 frames, and after the current image is loaded and stored in the image library, the number of images contained in the updated image library is 2700 frames.
S2: and setting the image sequence number of the current image as the number of the images contained in the updated image library. If the image sequence number of the current image is an integral multiple of the set initial color background updating frequency, the step is switched to S3, otherwise, the step is directly switched to S5. The method comprises the following specific steps:
the number of images included in the updated image library is 2700, so the image number k of the current image is 2700. Of the current image is
Figure BDA0002973564070000061
As shown in fig. 2. To facilitate viewing of the three color channel values of the current image, in this embodiment, the X-axis represents the horizontal coordinate of the pixel, the Y-axis represents the vertical coordinate of the pixel, the Z-axis represents the three color channel values at each pixel position (i, j) in the current image, and a three-dimensional rectangular coordinate system is established, as shown in (a), (b), and (c) of fig. 3, which are the R values of the current image respectively
Figure BDA0002973564070000062
G value
Figure BDA0002973564070000063
And B value
Figure BDA0002973564070000064
Comparing (a), (b) and (c) in fig. 3, it can be seen that there is a certain difference between the three color channel values of the current image, but the basic data trend is more consistent.
If the image sequence number of the current image is an integer multiple of the initial color background update frequency, that is, if equation (1) is satisfied, go to S3, otherwise, go directly to S5, that is:
k=Rate×N (1)
in the formula (1), k is the image number of the current image; rate is the update frequency of the initial color background, and the unit is a frame; n is a positive integer. The image number of the current image is 2700 and the update frequency of the initial color background is 900 frames, so that the image number of the current image is 3 times the update frequency of the initial color background, and the process proceeds to S3 if equation (1) is satisfied;
s3: the image library used to extract the initial color background is updated. The method comprises the following specific steps:
and selecting K frames of images at equal intervals in the image library by taking the image serial number [ K-Rate, K ] as a range, and updating the image library for extracting the initial color background. The selected image sequence number n satisfies formula (2):
Figure BDA0002973564070000071
in the formula (2), n is the selected image serial number; k is the image quantity of the set initial color background image library, the unit is a frame, and K is less than or equal to Rate; [ x ] is a rounding function, taking the largest integer not exceeding the real number x.
The image number of the current image is 2700, the updating frequency of the initial color background is 900 frames, and the number of the images in the initial color background image library is 30 frames, so that 30 frames of images are selected at equal intervals in the image library by taking the image number [1800, 2700] as a range to update the initial color background image library. Using equation (2), the image numbers selected are 1801, 1831, 1861, 1891, 1921, 1951, 1981, 2011, 2041, 2071, 2101, 2131, 2161, 2191, 2221, 2251, 2281, 2311, 2341, 2371, 2401, 2431, 2461, 2491, 2521, 2551, 2581, 2611, 2641, 2671.
S4: based on the updated initial color background image library, an initial color background is extracted and updated. The method comprises the following specific steps:
in the updated initial color background image library, for each pixel position (i, j), an image is computed
Figure BDA0002973564070000077
The average of the three color channels is shown in equation (3):
Figure BDA0002973564070000072
in the formula (3), the reaction mixture is,
Figure BDA0002973564070000073
and
Figure BDA0002973564070000074
at pixel positions (i, j), respectively, of the image
Figure BDA0002973564070000075
The three color channel values and the three channel average value; n is the image number of the image in the initial color background image library, and satisfies the formula (2).
For each pixel position (i, j), the K average values
Figure BDA0002973564070000076
Sorting is performed from small to large. When K is an even number, recording the image sequence number corresponding to the average value at the (K/2) position, and recording the image sequence number as ORD (i, j); when K is an odd number, the image number corresponding to the average value at the ((K +1)/2) position is recorded.
For each frame image in the updated initial color background image library, the average value of the three color channels at each pixel position of the frame image is calculated by using the formula (3). The number of images in the initial color background image library is 30, so that at each pixel position, 30 three-channel average values are sorted from small to large. Since 30 is an even number, the image number corresponding to the 15 th average value is recorded as ORD (i, j). The recorded results of part ORD (i, j) are shown in Table 1:
TABLE 1 record of part ORD (i, j)
Figure BDA0002973564070000081
In the updated initial color background image library, for each pixel position (i, j), the pixel value of the image with the image number ORD (i, j) at the pixel position (i, j) is taken as the initial color background value of the pixel position (i, j), as shown in equation (4):
Figure BDA0002973564070000082
in the formula (4), the reaction mixture is,
Figure BDA0002973564070000083
and
Figure BDA0002973564070000084
three color channel values of the initial color background at pixel location (i, j), respectively; ORD (i, j) is at pixel location (i, j) for selecting a color channel valueThe image number. The updated initial color background is expressed as
Figure BDA0002973564070000085
Combining the image sequence numbers ORD (i, j) corresponding to the recorded pixel positions (i, j), extracting and updating the initial color background by using the formula (4)
Figure BDA0002973564070000086
FIG. 4 is an initial color background
Figure BDA0002973564070000087
In FIG. 5, (a), (b) and (c) are initial color backgrounds, respectively
Figure BDA0002973564070000088
R value of
Figure BDA0002973564070000089
G value
Figure BDA00029735640700000810
And B value
Figure BDA00029735640700000811
Comparing fig. 4 and fig. 2, from the image level, the color background of the video scene has been substantially extracted, and the updated initial color background does not include the foreground. However, the subsequent scene analysis needs to be performed at the pixel level, so the initial color background needs to be further optimized at the pixel level to extract a high-precision real-time color background.
S5: calculating the difference value of the current image and the initial color background. The method comprises the following specific steps:
further, in step S5, the difference value between the current image and the initial color background is calculated as shown in equation (5):
Figure BDA0002973564070000091
in the formula (5), the reaction mixture is,
Figure BDA0002973564070000092
is the difference value of the current image and the initial color background;
Figure BDA0002973564070000093
and
Figure BDA0002973564070000094
are respectively the current image
Figure BDA0002973564070000095
And an initial colored background
Figure BDA0002973564070000096
The three-channel average value of (2) is calculated as shown in equations (6) and (7):
Figure BDA0002973564070000097
Figure BDA0002973564070000098
in the formula (6), the reaction mixture is,
Figure BDA0002973564070000099
at pixel positions (i, j), respectively, of the current image
Figure BDA00029735640700000910
Three color channel values of (a); in the formula (7), the reaction mixture is,
Figure BDA00029735640700000911
at pixel positions (i, j), respectively, an initial colored background
Figure BDA00029735640700000912
Three color channel values of (a);
to extract a high-precision real-time color background, it is first necessary to analyze the current image and the initial color backgroundThe differential value. Using equations (6) and (7), three-channel averages of the current image and the initial color background are calculated, respectively
Figure BDA00029735640700000913
And
Figure BDA00029735640700000914
as shown in (a) and (b) in fig. 6. Comparing (a) and (b) in fig. 6, it can be found that the three-channel average values of the current image and the initial color background are more consistent with the data trends of the three color channel values thereof, and the three-channel average values can also better describe the contour of the color image.
Calculating the difference value between the current image and the initial color background by using the formula (5)
Figure BDA00029735640700000915
Three-dimensional representation thereof is shown in fig. 7 (a). It can be seen that the difference between the current image and the initial color background is mainly reflected in the positions of the two foreground objects in fig. 2, but there is a certain difference between the background positions. To further analyze the difference between the current image and the initial color background, the X-axis represents the horizontal coordinate and the vertical coordinate of the pixel point, and the Y-axis represents the difference between the current image and the initial color background, so as to establish a two-dimensional rectangular coordinate system, as shown in (b) and (c) of fig. 7. The difference between the current image and the original color background can be observed from the transverse direction and the longitudinal direction respectively, and the difference of the positions of the rest backgrounds is also prominent except for two peaks. The color background with higher precision only has larger difference from the current image at the foreground position and has smaller difference from the background position, so that the point that the precision of the initial color background is lower can be shown, and the establishment of the real-time color background with high precision is necessary.
S6: and calculating the initial color background weight and the current image weight at each pixel position, and extracting the real-time color background. The method comprises the following specific steps:
the initial color background weight and the current image weight are not empirical values, but are adaptively adjusted as the pixel position (i, j) changes, and are calculated as shown in equation (8):
Figure BDA0002973564070000101
in the formula (8), the reaction mixture is,
Figure BDA0002973564070000102
and
Figure BDA0002973564070000103
the initial color background weight and the current image weight for pixel location (i, j), respectively.
Using equation (8), the initial color background weight for each pixel location (i, j) is calculated
Figure BDA0002973564070000104
And current image weight
Figure BDA0002973564070000105
Partial initial color background weight
Figure BDA0002973564070000106
And current image weight
Figure BDA0002973564070000107
The results of the calculations are shown in tables 2 and 3:
TABLE 2 part initial color background weights
Figure BDA0002973564070000108
Result of calculation of (2)
Figure BDA0002973564070000109
Figure BDA0002973564070000111
TABLE 3 partial Current image weights
Figure BDA0002973564070000112
Result of calculation of (2)
Figure BDA0002973564070000113
If the initial color background weight for pixel location (i, j) satisfies equation (9),
Figure BDA0002973564070000114
the real-time color background value for pixel location (i, j) is calculated as shown in equation (10):
Figure BDA0002973564070000115
otherwise, the real-time color background value of the pixel position (i, j) is calculated as shown in equation (11):
Figure BDA0002973564070000116
in the formulae (10) and (11),
Figure BDA0002973564070000117
and
Figure BDA0002973564070000118
respectively, three color channel values of the real-time color background at pixel location (i, j). The updated initial color background is expressed as
Figure BDA0002973564070000119
The real-time color background of the current image is calculated using equations (9), (10) and (11). FIG. 8 is a real-time color background
Figure BDA0002973564070000121
In FIG. 9, (a), (b) and (c) are real-time color backgrounds, respectively
Figure BDA0002973564070000122
R value of
Figure BDA0002973564070000123
G value
Figure BDA0002973564070000124
And B value
Figure BDA0002973564070000125
Comparing fig. 8 and fig. 4, at the image level, the initial color background and the real-time color background are very similar, and the real-time color background also contains no foreground.
To further analyze the accuracy of the real-time color background from the pixel level, the difference between the current image and the real-time color background is calculated as shown in equation (12):
Figure BDA0002973564070000126
in the formula (12), the reaction mixture is,
Figure BDA0002973564070000127
is the difference value between the current image and the real-time color background;
Figure BDA0002973564070000128
is a three-channel average of the real-time color background, calculated as shown in equation (13):
Figure BDA0002973564070000129
in the formula (13), the reaction mixture is,
Figure BDA00029735640700001210
and
Figure BDA00029735640700001211
are respectively carried out at pixel positions (i, j)Three color channel values of a colored background.
Calculating the difference value between the current image and the real-time color background by using the formula (12)
Figure BDA00029735640700001212
As shown in fig. 10 (a), the differential values in the lateral and longitudinal directions thereof are shown in fig. 10 (b) and (c). The difference between the real-time color background and the current image is large at the foreground position, generally higher than 30, and small at the background position, substantially lower than 20. Comparing the extraction result of the initial color background, it can be found that the difference between the initial color background and the current image at the background position is large, even close to 40. Compared with the initial color background, the precision of the real-time color background is obviously improved. The real-time color background constructed by the invention has larger difference with the current image only at the foreground position, and has smaller difference with the current image at the background position, so the method provided by the invention can quickly obtain the high-precision real-time color background.
The embodiments are only for illustrating the technical idea of the present invention, and the technical idea of the present invention is not limited thereto, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the scope of the present invention.

Claims (8)

1. A fast self-adaptive real-time color background extraction method is characterized by comprising the following steps:
(1) loading a current image based on a monitoring video acquired in real time, storing the current image into an image library, and updating the number of images in the image library;
(2) setting the image sequence number of the current image as the number of images contained in the updated image library; if the image sequence number of the current image is an integral multiple of the set initial color background updating frequency, the step (3) is carried out, otherwise, the step (5) is directly carried out;
(3) updating an image library for extracting an initial color background;
(4) extracting and updating an initial color background based on the updated initial color background image library;
(5) calculating a difference value between the current image and the initial color background;
(6) and calculating the initial color background weight and the current image weight at each pixel position, and extracting the real-time color background.
2. The fast adaptive real-time color background extraction method according to claim 1, wherein in step (3), with the image number [ K-Rate, K ] as a range, K frames of images are selected at equal intervals in the image library, and the image library for extracting the initial color background is updated, wherein the selected image number n satisfies the following formula:
Figure FDA0002973564060000011
where K is the number of images included in the updated image library, K is Rate × N, Rate is the update frequency of the set initial color background, N is a positive integer, K is the number of images in the set initial color background image library, and satisfies K ≦ Rate, [ x ] denotes taking the maximum integer not exceeding x.
3. The fast adaptive real-time color background extraction method according to claim 2, wherein in step (4), an image is computed for each pixel position (i, j) in the updated initial color background image library
Figure FDA0002973564060000012
Average of three color channels of
Figure FDA0002973564060000013
Figure FDA0002973564060000014
Wherein the content of the first and second substances,
Figure FDA0002973564060000015
respectively, are images at pixel positions (i, j)
Figure FDA0002973564060000016
N is the image
Figure FDA0002973564060000021
The serial number of (2).
4. A fast adaptive real-time color background extraction method according to claim 3, characterized in that in step (4), K average values are averaged for each pixel position (i, j)
Figure FDA0002973564060000022
Sorting from small to large; when K is an even number, the image number corresponding to the average value at the (K/2) position is recorded, and when K is an odd number, the image number corresponding to the average value at the ((K +1)/2) position is recorded, and the recorded image number is denoted as ORD (i, j).
5. The fast adaptive real-time color background extraction method according to claim 4, wherein in step (4), for each pixel position (i, j) in the updated initial color background image library, the pixel value of the image with the image number ORD (i, j) at the pixel position (i, j) is taken as the initial color background value of the pixel position (i, j):
Figure FDA0002973564060000023
wherein the content of the first and second substances,
Figure FDA0002973564060000024
respectively three color channel values of the initial color background at pixel location (i, j),
Figure FDA0002973564060000025
three color channel values at pixel location (i, j) for an image with image number ORD (i, j), respectively.
6. The fast adaptive real-time color background extraction method according to claim 5, wherein in step (5), the difference value between the current image and the initial color background is as follows:
Figure FDA0002973564060000026
wherein the content of the first and second substances,
Figure FDA0002973564060000027
is the difference value of the current image and the initial color background;
Figure FDA0002973564060000028
and
Figure FDA0002973564060000029
are respectively the current image
Figure FDA00029735640600000210
And an initial colored background
Figure FDA00029735640600000211
Three-channel average of (1):
Figure FDA00029735640600000212
Figure FDA00029735640600000213
wherein the content of the first and second substances,
Figure FDA0002973564060000031
respectively, the current image at pixel position (i, j)
Figure FDA0002973564060000032
Three color channel values of (a);
Figure FDA0002973564060000033
respectively, an initial colored background at pixel location (i, j)
Figure FDA0002973564060000034
Three color channel values.
7. The fast adaptive real-time color background extraction method according to claim 6, wherein in step (6), the initial color background weight and the current image weight are adaptively adjusted as the pixel position (i, j) changes:
Figure FDA0002973564060000035
wherein the content of the first and second substances,
Figure FDA0002973564060000036
and
Figure FDA0002973564060000037
an initial color background weight and a current image weight at pixel location (i, j), respectively,
Figure FDA0002973564060000038
is that
Figure FDA0002973564060000039
The value at pixel location (i, j).
8. The fast adaptive real-time color background extraction method as claimed in claim 7, wherein the fast adaptive real-time color background extraction method is characterized in thatIn step (6), if the initial color background weight of the pixel location (i, j)
Figure FDA00029735640600000310
Satisfies the following conditions:
Figure FDA00029735640600000311
the real-time color background value for pixel location (i, j):
Figure FDA00029735640600000312
otherwise the real-time color background value of pixel location (i, j):
Figure FDA00029735640600000313
wherein the content of the first and second substances,
Figure FDA00029735640600000314
respectively, three color channel values of the real-time color background at pixel location (i, j).
CN202110269289.7A 2021-03-12 2021-03-12 Quick self-adaptive real-time color background extraction method Active CN112991293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110269289.7A CN112991293B (en) 2021-03-12 2021-03-12 Quick self-adaptive real-time color background extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110269289.7A CN112991293B (en) 2021-03-12 2021-03-12 Quick self-adaptive real-time color background extraction method

Publications (2)

Publication Number Publication Date
CN112991293A true CN112991293A (en) 2021-06-18
CN112991293B CN112991293B (en) 2024-04-26

Family

ID=76334611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110269289.7A Active CN112991293B (en) 2021-03-12 2021-03-12 Quick self-adaptive real-time color background extraction method

Country Status (1)

Country Link
CN (1) CN112991293B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110175984A1 (en) * 2010-01-21 2011-07-21 Samsung Electronics Co., Ltd. Method and system of extracting the target object data on the basis of data concerning the color and depth
CN109859236A (en) * 2019-01-02 2019-06-07 广州大学 Mobile object detection method, calculates equipment and storage medium at system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110175984A1 (en) * 2010-01-21 2011-07-21 Samsung Electronics Co., Ltd. Method and system of extracting the target object data on the basis of data concerning the color and depth
CN109859236A (en) * 2019-01-02 2019-06-07 广州大学 Mobile object detection method, calculates equipment and storage medium at system

Also Published As

Publication number Publication date
CN112991293B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN109919032B (en) Video abnormal behavior detection method based on motion prediction
CN108288270B (en) Target detection method based on channel pruning and full convolution deep learning
CN106157329B (en) Self-adaptive target tracking method and device
CN110910421B (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
CN111950723A (en) Neural network model training method, image processing method, device and terminal equipment
CN111079539B (en) Video abnormal behavior detection method based on abnormal tracking
CN111242026B (en) Remote sensing image target detection method based on spatial hierarchy perception module and metric learning
CN107527370B (en) Target tracking method based on camshift
CN112364865B (en) Method for detecting small moving target in complex scene
CN111652869A (en) Slab void identification method, system, medium and terminal based on deep learning
CN115512251A (en) Unmanned aerial vehicle low-illumination target tracking method based on double-branch progressive feature enhancement
CN111325204A (en) Target detection method, target detection device, electronic equipment and storage medium
CN112508851A (en) Mud rock lithology recognition system based on CNN classification algorithm
CN112560799B (en) Unmanned aerial vehicle intelligent vehicle target detection method based on adaptive target area search and game and application
CN110930436B (en) Target tracking method and device
CN103578121B (en) Method for testing motion based on shared Gauss model under disturbed motion environment
CN110889347B (en) Density traffic flow counting method and system based on space-time counting characteristics
CN112991293A (en) Fast self-adaptive real-time color background extraction method
CN110322479B (en) Dual-core KCF target tracking method based on space-time significance
US10776932B2 (en) Determining whether ground is to be re-detected
CN105184809A (en) Moving object detection method and moving object detection device
CN109657577B (en) Animal detection method based on entropy and motion offset
CN115294035B (en) Bright spot positioning method, bright spot positioning device, electronic equipment and storage medium
CN113255549B (en) Intelligent recognition method and system for behavior state of wolf-swarm hunting

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant