CN112991293A - Fast self-adaptive real-time color background extraction method - Google Patents
- Publication number: CN112991293A (application CN202110269289.7A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T7/0002 - Image analysis; inspection of images, e.g. flaw detection
- G06N3/045 - Neural networks; combinations of networks
- G06N3/08 - Neural networks; learning methods
- G06T7/194 - Segmentation; edge detection involving foreground-background segmentation
- G06V10/56 - Extraction of image or video features relating to colour
- G06T2207/10024 - Image acquisition modality: color image
- G06T2207/20081 - Special algorithmic details: training; learning
- G06T2207/20084 - Special algorithmic details: artificial neural networks [ANN]
Abstract
The invention discloses a fast self-adaptive real-time color background extraction method, which comprises the following steps: loading a current image based on a monitoring video acquired in real time, storing the current image into an image library, and updating the number of images in the image library; setting the image sequence number of the current image as the number of images contained in the updated image library; updating an image library for extracting an initial color background; extracting and updating an initial color background based on the updated initial color background image library; calculating a difference value between the current image and the initial color background; and calculating the initial color background weight and the current image weight at each pixel position, and extracting the real-time color background. The invention improves the background precision and the calculation speed, and is beneficial to improving the efficiency and the accuracy of video monitoring scene analysis.
Description
Technical Field
The invention belongs to the technical field of computer vision and image processing, and particularly relates to a color background extraction method.
Background
Nowadays, with the development of computer vision and image processing technology, intelligent management of video data has gradually attracted great attention. Understanding and analyzing scenes from video is an important research direction, mainly covering object classification, localization and event detection, and it is widely applied in real life: in security management, abnormal targets and behaviors can be recognized; in intelligent transportation, moving vehicles can be detected and classified. At present, scene analysis methods fall mainly into two types: scene analysis based on deep neural networks and scene analysis based on a background. The former collects many video frames to train a neural network and then uses the trained model to identify abnormal behaviors or moving targets in a scene. Although its precision is higher, it usually performs well only on the scenes the network was trained on; when the video scene changes, its performance drops sharply, so its universality is poor. Meanwhile, it requires a large amount of computation and cannot meet real-time requirements.
The background-based scene analysis method realizes object classification, localization and event detection by extracting the background of a video scene and analyzing the difference between the current video frame and that background. By contrast, background-based scene analysis shows great advantages in both universality and real-time performance: it needs no training data and suits various scenes, and its computation is much faster than that of deep-neural-network methods, so it can meet real-time requirements. However, the effectiveness of background-based scene analysis depends on the accuracy of the extracted background, and changes in the real environment make it difficult to extract a high-precision real-time color background. Moreover, since background extraction underlies all subsequent scene analysis, its efficiency must be high so that more computing power is left free for upper-layer applications. Most importantly, most existing background extraction results are grayscale backgrounds, which lose a large amount of color information and hinder subsequent scene analysis. Quickly extracting a real-time, high-precision color background therefore has important practical significance.
Disclosure of Invention
In order to solve the technical problems mentioned in the background art, the invention provides a fast self-adaptive real-time color background extraction method.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
a fast self-adaptive real-time color background extraction method is characterized by comprising the following steps:
(1) loading a current image based on a monitoring video acquired in real time, storing the current image into an image library, and updating the number of images in the image library;
(2) setting the image sequence number of the current image as the number of images contained in the updated image library; if the image sequence number of the current image is an integral multiple of the set initial color background updating frequency, the step (3) is carried out, otherwise, the step (5) is directly carried out;
(3) updating an image library for extracting an initial color background;
(4) extracting and updating an initial color background based on the updated initial color background image library;
(5) calculating a difference value between the current image and the initial color background;
(6) calculating the initial color background weight and the current image weight at each pixel position, and extracting the real-time color background.
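As a sketch, the update-scheduling logic of steps (1) and (2) can be expressed as follows; the driver loop and all names are illustrative assumptions, not part of the claimed method:

```python
def should_update_initial_background(k, rate):
    """Step (2): refresh the initial color background only when the current
    image's sequence number k is an integral multiple of the update
    frequency `rate`, i.e. k = rate * N for some positive integer N."""
    return k > 0 and k % rate == 0

# Steps (1)-(2): store each incoming frame, let its sequence number be the
# updated library size, and schedule the background refresh of steps (3)-(4).
image_library = []
refresh_points = []
for frame in range(1, 3000):       # stand-in for frames of a live video
    image_library.append(frame)    # step (1): store the current image
    k = len(image_library)         # step (2): sequence number = library size
    if should_update_initial_background(k, rate=900):
        refresh_points.append(k)   # steps (3)-(4) would run here
print(refresh_points)              # refreshes at 900, 1800, 2700
```

Steps (5) and (6) run on every frame; only the expensive background refresh is gated by the update frequency.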
Further, in step (3), taking the image sequence numbers [k - Rate, k] as the range, K frames of images are selected at equal intervals from the image library to update the image library used for extracting the initial color background, the selected image sequence numbers n satisfying:

n = k - Rate + 1 + (m - 1)·[Rate/K],  m = 1, 2, …, K

where k is the number of images contained in the updated image library and k = Rate × N, Rate is the set update frequency of the initial color background, N is a positive integer, K is the set number of images in the initial color background image library and satisfies K ≤ Rate, and [x] denotes taking the largest integer not exceeding x.
Further, in step (4), for each pixel position (i, j), the average of the three color channels of each image n in the updated initial color background image library is computed:

Ā_n(i, j) = (R_n(i, j) + G_n(i, j) + B_n(i, j)) / 3

where R_n(i, j), G_n(i, j) and B_n(i, j) are the three color channel values of image n at pixel position (i, j), and n is the sequence number of the image.
Further, in step (4), for each pixel position (i, j), the K average values Ā_n(i, j) are sorted from small to large; when K is even, the image sequence number corresponding to the average value at the (K/2)-th position is recorded, and when K is odd, the image sequence number corresponding to the average value at the ((K+1)/2)-th position is recorded. The recorded image sequence number is denoted ORD(i, j).
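The even/odd rule above can be restated compactly; the helper name is an illustrative assumption:

```python
def median_position(K):
    """1-based rank of the average value recorded in step (4):
    K/2 when K is even, (K+1)/2 when K is odd."""
    return K // 2 if K % 2 == 0 else (K + 1) // 2

print(median_position(30))  # 15: the embodiment's K = 30 uses the 15th value
```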
Further, in step (4), in the updated initial color background image library, for each pixel position (i, j), the pixel value at (i, j) of the image with sequence number ORD(i, j) is taken as the initial color background value of pixel position (i, j):

R_bg(i, j) = R_{ORD(i, j)}(i, j),  G_bg(i, j) = G_{ORD(i, j)}(i, j),  B_bg(i, j) = B_{ORD(i, j)}(i, j)

where R_bg(i, j), G_bg(i, j) and B_bg(i, j) are the three color channel values of the initial color background at pixel position (i, j), and R_{ORD(i, j)}(i, j), G_{ORD(i, j)}(i, j) and B_{ORD(i, j)}(i, j) are the three color channel values at (i, j) of the image with sequence number ORD(i, j).
Further, in step (5), the difference value between the current image and the initial color background is:

D(i, j) = |Ā_cur(i, j) − Ā_bg(i, j)|

where D(i, j) is the difference value between the current image and the initial color background, and Ā_cur(i, j) and Ā_bg(i, j) are the three-channel averages of the current image and the initial color background, respectively:

Ā_cur(i, j) = (R_cur(i, j) + G_cur(i, j) + B_cur(i, j)) / 3
Ā_bg(i, j) = (R_bg(i, j) + G_bg(i, j) + B_bg(i, j)) / 3

where R_cur(i, j), G_cur(i, j) and B_cur(i, j) are the three color channel values of the current image at pixel position (i, j), and R_bg(i, j), G_bg(i, j) and B_bg(i, j) are those of the initial color background.
Further, in step (6), the initial color background weight and the current image weight are adaptively adjusted as the pixel position (i, j) changes, depending on the difference value between the current image and the initial color background at that position, where w_bg(i, j) and w_cur(i, j) denote the initial color background weight and the current image weight at pixel position (i, j).
Further, in step (6), if the initial color background weight w_bg(i, j) of pixel position (i, j) satisfies the condition of equation (9), the real-time color background value of pixel position (i, j) is given by equation (10); otherwise it is given by equation (11). Here R_rt(i, j), G_rt(i, j) and B_rt(i, j) are the three color channel values of the real-time color background at pixel position (i, j).
Beneficial effects brought by the above technical scheme:
the invention realizes the extraction of the real-time color background, improves the precision and the calculation speed of the background, is beneficial to improving the efficiency and the accuracy of the video monitoring scene analysis, and has very important practical significance.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a current image in the embodiment;
FIG. 3 is a schematic three-channel diagram corresponding to FIG. 2, wherein (a), (b), and (c) correspond to R, G, B, respectively;
FIG. 4 is an initial color background diagram in an embodiment;
FIG. 5 is a schematic three-channel diagram corresponding to FIG. 4, wherein (a), (b), and (c) correspond to R, G, B, respectively;
FIG. 6 is a schematic diagram of three-channel average values of a current image and an initial color background in an embodiment, where (a) and (b) correspond to the current image and the initial color background, respectively;
FIG. 7 is a differential value and two-dimensional representation of the current image and the initial color background in the embodiment, wherein (a), (b), and (c) correspond to the differential value graph, the two-dimensional representation of the horizontal position, and the two-dimensional representation of the vertical position, respectively;
FIG. 8 is a real-time color background map of an embodiment;
FIG. 9 is a schematic three-channel diagram corresponding to FIG. 8, wherein (a), (b), and (c) correspond to R, G, B, respectively;
FIG. 10 is a differential value and two-dimensional representation of the current image and the real-time color background in the embodiment, wherein (a), (b), and (c) correspond to the differential value graph, the two-dimensional representation of the horizontal position, and the two-dimensional representation of the vertical position, respectively.
Detailed Description
The technical scheme of the invention is explained in detail in the following with the accompanying drawings.
In the embodiment, the video frame rate is 30 frames/second, the update frequency Rate of the initial color background is set to 900 frames, and the number of images K in the initial color background image library is set to 30 frames, so the initial color background is updated only once every 30 seconds (900 frames at 30 frames/second), which increases the speed of the technical scheme. To exercise all steps, the number of images contained in the current image library is set to 2699 frames. As shown in fig. 1, the steps of this embodiment are as follows:
s1: and loading the current image based on the monitoring video acquired in real time and storing the current image in an image library. The number of images in the image library is updated. The method comprises the following specific steps:
the number of images contained in the current image library is 2699 frames, and after the current image is loaded and stored in the image library, the number of images contained in the updated image library is 2700 frames.
S2: and setting the image sequence number of the current image as the number of the images contained in the updated image library. If the image sequence number of the current image is an integral multiple of the set initial color background updating frequency, the step is switched to S3, otherwise, the step is directly switched to S5. The method comprises the following specific steps:
the number of images included in the updated image library is 2700, so the image sequence number k of the current image is 2700. The current image is shown in fig. 2. To facilitate viewing of its three color channel values, a three-dimensional rectangular coordinate system is established in this embodiment: the X-axis represents the horizontal pixel coordinate, the Y-axis the vertical pixel coordinate, and the Z-axis the color channel value at each pixel position (i, j) of the current image. Fig. 3 (a), (b) and (c) show the R, G and B values of the current image, respectively. Comparing (a), (b) and (c) in fig. 3, the three color channels differ somewhat in value but follow a largely consistent data trend.
If the image sequence number of the current image is an integer multiple of the initial color background update frequency, that is, if equation (1) is satisfied, go to S3, otherwise, go directly to S5, that is:
k=Rate×N (1)
In equation (1), k is the image sequence number of the current image; Rate is the update frequency of the initial color background, in frames; N is a positive integer. The image sequence number of the current image is 2700 and the update frequency of the initial color background is 900 frames, so the image sequence number of the current image is 3 times the update frequency; equation (1) is satisfied and the process proceeds to S3.
s3: the image library used to extract the initial color background is updated. The method comprises the following specific steps:
Taking the image sequence numbers [k - Rate, k] as the range, K frames of images are selected at equal intervals from the image library to update the image library used for extracting the initial color background. The selected image sequence numbers n satisfy equation (2):

n = k - Rate + 1 + (m - 1)·[Rate/K],  m = 1, 2, …, K (2)

In equation (2), n is a selected image sequence number; K is the set number of images in the initial color background image library, in frames, with K ≤ Rate; [x] is the rounding function taking the largest integer not exceeding the real number x.
The image number of the current image is 2700, the updating frequency of the initial color background is 900 frames, and the number of the images in the initial color background image library is 30 frames, so that 30 frames of images are selected at equal intervals in the image library by taking the image number [1800, 2700] as a range to update the initial color background image library. Using equation (2), the image numbers selected are 1801, 1831, 1861, 1891, 1921, 1951, 1981, 2011, 2041, 2071, 2101, 2131, 2161, 2191, 2221, 2251, 2281, 2311, 2341, 2371, 2401, 2431, 2461, 2491, 2521, 2551, 2581, 2611, 2641, 2671.
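The equal-interval selection can be sketched in a few lines; the function name is an assumption, and the index formula is reconstructed from the sequence numbers listed above:

```python
def select_frame_indices(k, rate, K):
    """Select K image sequence numbers at equal intervals from [k - rate, k]:
    n = k - rate + 1 + (m - 1) * floor(rate / K), m = 1..K."""
    step = rate // K
    return [k - rate + 1 + (m - 1) * step for m in range(1, K + 1)]

indices = select_frame_indices(k=2700, rate=900, K=30)
print(indices[:3], indices[-1])  # [1801, 1831, 1861] 2671
```

With the embodiment's values (k = 2700, Rate = 900, K = 30) this reproduces the thirty sequence numbers 1801, 1831, …, 2671 given above.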
S4: based on the updated initial color background image library, an initial color background is extracted and updated. The method comprises the following specific steps:
In the updated initial color background image library, for each pixel position (i, j), the three-channel average of each image n is computed as shown in equation (3):

Ā_n(i, j) = (R_n(i, j) + G_n(i, j) + B_n(i, j)) / 3 (3)

In equation (3), R_n(i, j), G_n(i, j), B_n(i, j) and Ā_n(i, j) are, at pixel position (i, j), the three color channel values and the three-channel average of image n; n is the sequence number of an image in the initial color background image library and satisfies equation (2).
For each pixel position (i, j), the K average values Ā_n(i, j) are sorted from small to large. When K is even, the image sequence number corresponding to the average value at the (K/2)-th position is recorded and denoted ORD(i, j); when K is odd, the image sequence number corresponding to the average value at the ((K+1)/2)-th position is recorded.
For each frame image in the updated initial color background image library, the average value of the three color channels at each pixel position of the frame image is calculated by using the formula (3). The number of images in the initial color background image library is 30, so that at each pixel position, 30 three-channel average values are sorted from small to large. Since 30 is an even number, the image number corresponding to the 15 th average value is recorded as ORD (i, j). The recorded results of part ORD (i, j) are shown in Table 1:
TABLE 1 record of part ORD (i, j)
In the updated initial color background image library, for each pixel position (i, j), the pixel value at (i, j) of the image with sequence number ORD(i, j) is taken as the initial color background value of pixel position (i, j), as shown in equation (4):

R_bg(i, j) = R_{ORD(i, j)}(i, j),  G_bg(i, j) = G_{ORD(i, j)}(i, j),  B_bg(i, j) = B_{ORD(i, j)}(i, j) (4)

In equation (4), R_bg(i, j), G_bg(i, j) and B_bg(i, j) are the three color channel values of the initial color background at pixel position (i, j); ORD(i, j) is the sequence number of the image whose color channel values are selected at pixel position (i, j).
Combining the recorded sequence numbers ORD(i, j) for each pixel position (i, j), the initial color background is extracted and updated using equation (4). Fig. 4 shows the initial color background; fig. 5 (a), (b) and (c) show its R, G and B values, respectively. Comparing fig. 4 and fig. 2, at the image level the color background of the video scene has been substantially extracted, and the updated initial color background contains no foreground. However, subsequent scene analysis must be performed at the pixel level, so the initial color background needs to be further optimized at the pixel level to extract a high-precision real-time color background.
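Step S4 can be sketched with NumPy; the per-pixel median-by-average selection below is a minimal sketch of the channel-averaging and value-copying steps, with all names illustrative:

```python
import numpy as np

def initial_color_background(frames):
    """frames: float array of shape (K, H, W, 3).
    For each pixel, rank the K frames by their three-channel average,
    record the median-ranked frame as ORD(i, j), and copy that frame's
    full RGB value as the initial color background value."""
    K = frames.shape[0]
    avg = frames.mean(axis=3)                    # (K, H, W) channel averages
    order = np.argsort(avg, axis=0)              # per-pixel frame ranking
    mid = K // 2 - 1 if K % 2 == 0 else K // 2   # 0-based (K/2)-th / ((K+1)/2)-th
    ord_ij = order[mid]                          # ORD(i, j), shape (H, W)
    h, w = np.indices(ord_ij.shape)
    return frames[ord_ij, h, w]                  # (H, W, 3) initial background
```

Taking a whole RGB triple from one ranked frame, rather than a per-channel median, keeps the background colors physically consistent at each pixel.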
S5: calculating the difference value of the current image and the initial color background. The method comprises the following specific steps:
In step S5, the difference value between the current image and the initial color background is calculated as shown in equation (5):

D(i, j) = |Ā_cur(i, j) − Ā_bg(i, j)| (5)

In equation (5), D(i, j) is the difference value between the current image and the initial color background; Ā_cur(i, j) and Ā_bg(i, j) are the three-channel averages of the current image and the initial color background, calculated as shown in equations (6) and (7):

Ā_cur(i, j) = (R_cur(i, j) + G_cur(i, j) + B_cur(i, j)) / 3 (6)
Ā_bg(i, j) = (R_bg(i, j) + G_bg(i, j) + B_bg(i, j)) / 3 (7)

In equation (6), R_cur(i, j), G_cur(i, j) and B_cur(i, j) are the three color channel values of the current image at pixel position (i, j); in equation (7), R_bg(i, j), G_bg(i, j) and B_bg(i, j) are the three color channel values of the initial color background at pixel position (i, j).
To extract a high-precision real-time color background, it is first necessary to analyze the difference value between the current image and the initial color background. Using equations (6) and (7), the three-channel averages of the current image and of the initial color background are calculated, as shown in fig. 6 (a) and (b). Comparing the two, the three-channel averages of the current image and the initial color background are consistent with the data trends of their three color channel values, and they describe the contour of the color image well.
The difference value between the current image and the initial color background is calculated using equation (5); its three-dimensional representation is shown in fig. 7 (a). The difference between the current image and the initial color background is mainly reflected at the positions of the two foreground objects in fig. 2, but there are also certain differences at background positions. To analyze this further, a two-dimensional rectangular coordinate system is established with the X-axis representing the horizontal or vertical pixel coordinate and the Y-axis the difference value, as shown in fig. 7 (b) and (c). Viewed both transversely and longitudinally, besides the two peaks the differences at the remaining background positions are also prominent. A high-precision color background should differ strongly from the current image only at foreground positions and only slightly at background positions; this shows that the precision of the initial color background is still low, making the construction of a high-precision real-time color background necessary.
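The difference computation amounts to comparing per-pixel channel averages; a minimal sketch follows, where the function name and the use of the absolute value are assumptions, since the sign convention of equation (5) is not reproduced in this text:

```python
import numpy as np

def difference_map(current, background):
    """Per-pixel absolute difference of the three-channel averages of the
    current image and a color background (equations (5)-(7) sketch)."""
    a_cur = current.astype(float).mean(axis=2)    # current-image average
    a_bg = background.astype(float).mean(axis=2)  # background average
    return np.abs(a_cur - a_bg)                   # (H, W) difference map
```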
S6: and calculating the initial color background weight and the current image weight at each pixel position, and extracting the real-time color background. The method comprises the following specific steps:
The initial color background weight and the current image weight are not empirical values but are adaptively adjusted as the pixel position (i, j) changes, as calculated in equation (8). In equation (8), w_bg(i, j) and w_cur(i, j) are the initial color background weight and the current image weight at pixel position (i, j), respectively.
Using equation (8), the initial color background weight w_bg(i, j) and the current image weight w_cur(i, j) are calculated for each pixel position (i, j). Partial calculation results of the weights are shown in tables 2 and 3:
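Since the weighting and fusion formulas are given here only by reference, the sketch below substitutes an assumed difference-driven weighting purely for illustration: the background weight grows with the difference value, and pixels whose background weight crosses a threshold (standing in for the patent's condition) keep the initial background unchanged so no foreground leaks into the result. The weighting formula, the threshold, and every name are assumptions, not the patent's equations:

```python
import numpy as np

def fuse_real_time_background(current, background, threshold=0.5):
    """Illustrative stand-in for the patent's weighting and fusion, NOT its
    actual formulas: assume w_bg = D / 255 and w_cur = 1 - w_bg; where
    w_bg >= threshold the pixel is treated as foreground and the initial
    background is kept unchanged."""
    current = current.astype(float)
    background = background.astype(float)
    d = np.abs(current.mean(axis=2) - background.mean(axis=2))  # difference map
    w_bg = d / 255.0                 # assumed: big difference -> trust background
    w_cur = 1.0 - w_bg
    keep = (w_bg >= threshold)[..., None]   # assumed threshold test
    fused = w_bg[..., None] * background + w_cur[..., None] * current
    return np.where(keep, background, fused)
```

Under this assumed scheme, stable background pixels track slow changes in the current image (illumination drift), while strongly differing pixels, i.e. likely foreground, leave the background untouched.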
If the initial color background weight w_bg(i, j) of pixel position (i, j) satisfies equation (9), the real-time color background value of pixel position (i, j) is calculated as shown in equation (10); otherwise it is calculated as shown in equation (11). In equations (10) and (11), R_rt(i, j), G_rt(i, j) and B_rt(i, j) are the three color channel values of the real-time color background at pixel position (i, j), and together they form the real-time color background.
The real-time color background of the current image is calculated using equations (9), (10) and (11). Fig. 8 shows the real-time color background; fig. 9 (a), (b) and (c) show its R, G and B values, respectively. Comparing fig. 8 and fig. 4, at the image level the initial color background and the real-time color background are very similar, and the real-time color background likewise contains no foreground.
To further analyze the accuracy of the real-time color background at the pixel level, the difference value between the current image and the real-time color background is calculated as shown in equation (12):

D_rt(i, j) = |Ā_cur(i, j) − Ā_rt(i, j)| (12)

In equation (12), D_rt(i, j) is the difference value between the current image and the real-time color background; Ā_rt(i, j) is the three-channel average of the real-time color background, calculated as shown in equation (13):

Ā_rt(i, j) = (R_rt(i, j) + G_rt(i, j) + B_rt(i, j)) / 3 (13)

In equation (13), R_rt(i, j), G_rt(i, j) and B_rt(i, j) are the three color channel values of the real-time color background at pixel position (i, j).
The difference value between the current image and the real-time color background is calculated using equation (12), as shown in fig. 10 (a); the transverse and longitudinal difference values are shown in fig. 10 (b) and (c). The difference between the real-time color background and the current image is large at foreground positions, generally higher than 30, and small at background positions, substantially lower than 20. Comparing with the extraction result of the initial color background, the difference between the initial color background and the current image at background positions is large, even close to 40. The precision of the real-time color background is therefore obviously improved over the initial color background. The real-time color background constructed by the invention differs strongly from the current image only at foreground positions and only slightly at background positions, so the proposed method can quickly obtain a high-precision real-time color background.
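The pixel-level accuracy claim (difference above about 30 at the foreground, below about 20 at the background) can be spot-checked given a foreground mask; the mask and the helper are illustrative assumptions:

```python
import numpy as np

def precision_summary(diff, foreground_mask):
    """Mean difference value at background and foreground positions.
    A high-precision color background should yield a low background mean
    and a high foreground mean."""
    return float(diff[~foreground_mask].mean()), float(diff[foreground_mask].mean())

diff = np.array([[5.0, 40.0],
                 [5.0, 35.0]])          # toy difference map
mask = np.array([[False, True],
                 [False, True]])        # toy foreground mask
print(precision_summary(diff, mask))    # (5.0, 37.5)
```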
The embodiments are only for illustrating the technical idea of the present invention, and the technical idea of the present invention is not limited thereto, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the scope of the present invention.
Claims (8)
1. A fast self-adaptive real-time color background extraction method is characterized by comprising the following steps:
(1) loading a current image based on a monitoring video acquired in real time, storing the current image into an image library, and updating the number of images in the image library;
(2) setting the image sequence number of the current image as the number of images contained in the updated image library; if the image sequence number of the current image is an integral multiple of the set initial color background updating frequency, the step (3) is carried out, otherwise, the step (5) is directly carried out;
(3) updating an image library for extracting an initial color background;
(4) extracting and updating an initial color background based on the updated initial color background image library;
(5) calculating a difference value between the current image and the initial color background;
(6) calculating the initial color background weight and the current image weight at each pixel position, and extracting the real-time color background.
2. The fast adaptive real-time color background extraction method according to claim 1, wherein in step (3), taking the image sequence numbers [k - Rate, k] as the range, K frames of images are selected at equal intervals from the image library to update the image library used for extracting the initial color background, the selected image sequence numbers n satisfying:

n = k - Rate + 1 + (m - 1)·[Rate/K],  m = 1, 2, …, K

where k is the number of images contained in the updated image library and k = Rate × N, Rate is the set update frequency of the initial color background, N is a positive integer, K is the set number of images in the initial color background image library and satisfies K ≤ Rate, and [x] denotes taking the largest integer not exceeding x.
3. The fast adaptive real-time color background extraction method according to claim 2, wherein in step (4), for each pixel position (i, j) in the updated initial color background image library, the average of the three color channels of each image n is computed:

Ā_n(i, j) = (R_n(i, j) + G_n(i, j) + B_n(i, j)) / 3
4. The fast adaptive real-time color background extraction method according to claim 3, wherein in step (4), for each pixel position (i, j), the K average values Ā_n(i, j) are sorted from small to large; when K is even, the image sequence number corresponding to the average value at the (K/2)-th position is recorded, and when K is odd, the image sequence number corresponding to the average value at the ((K+1)/2)-th position is recorded; the recorded image sequence number is denoted ORD(i, j).
5. The fast adaptive real-time color background extraction method according to claim 4, wherein in step (4), for each pixel position (i, j), the pixel value at (i, j) of the image whose image number is ORD(i, j) is taken as the initial color background value of pixel position (i, j).
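Taken together, claims 3–5 amount to a per-pixel median filter on the channel means of the k library images. A NumPy sketch (the array layout and the 1-based-to-0-based index conversion are mine):

```python
import numpy as np

def initial_background(images):
    """images: array-like of shape (k, H, W, 3).
    Claim 3: average the three color channels per image and pixel.
    Claim 4: sort the k averages per pixel; record the image number at
    position k/2 (k even) or (k+1)/2 (k odd), 1-based -> ORD(i, j).
    Claim 5: at each pixel, the initial background takes the pixel value
    of the image numbered ORD(i, j)."""
    imgs = np.asarray(images, dtype=np.float64)
    k = imgs.shape[0]
    avg = imgs.mean(axis=3)                  # (k, H, W) per-image channel means
    order = np.argsort(avg, axis=0)          # per-pixel sort of image indices
    pos = k // 2 - 1 if k % 2 == 0 else (k - 1) // 2   # 0-based median position
    ord_idx = order[pos]                     # (H, W): ORD(i, j)
    rows, cols = np.indices(ord_idx.shape)
    return imgs[ord_idx, rows, cols]         # (H, W, 3) initial color background
```

Picking the median image's full color pixel, rather than averaging, keeps each background pixel a real observed color and resists transient foreground objects.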
6. The fast adaptive real-time color background extraction method according to claim 5, wherein in step (5), the difference value between the current image and the initial color background is computed from their three-channel averages:
where D(i, j) is the difference value between the current image and the initial color background, and A_cur(i, j) and A_bg(i, j) are respectively the three-channel averages of the current image and the initial color background at pixel position (i, j).
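The claim's display formula is not reproduced in this text; the sketch below assumes the difference value is the absolute difference of the two three-channel means (the absolute value is my assumption — the claim only names a "difference value"):

```python
import numpy as np

def difference(current, background):
    """Per-pixel difference value between the current image and the
    initial color background, each first reduced to its three-channel
    mean (claim 6)."""
    cur = np.asarray(current, dtype=np.float64).mean(axis=2)  # (H, W)
    bg = np.asarray(background, dtype=np.float64).mean(axis=2)
    return np.abs(cur - bg)                                   # (H, W) difference map
```

A large difference at a pixel indicates likely foreground there; the weighting in step (6) uses this map to decide how much of the initial background to trust at each position.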
7. The fast adaptive real-time color background extraction method according to claim 6, wherein in step (6), the initial color background weight and the current image weight are adaptively adjusted as the pixel position (i, j) changes.
8. The fast adaptive real-time color background extraction method according to claim 7, wherein in step (6), if the initial color background weight of pixel position (i, j) satisfies:
the real-time color background value of pixel position (i, j) is:
otherwise the real-time color background value of pixel position (i, j) is:
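The weight formula and the branch condition of claims 7–8 are display equations not reproduced in this text. The sketch below therefore assumes one common choice: a background weight that decays exponentially with the difference value, and a condition that falls back to the current image when the weight drops below a threshold. Both the decay constant tau and the 0.5 threshold are illustrative, not from the claims:

```python
import numpy as np

def realtime_background(current, init_bg, diff, tau=30.0):
    """Blend the initial color background and the current image with
    per-pixel weights (claim 7); where the background weight fails the
    condition, keep the current image value instead (claim 8).
    diff: (H, W) difference map; current, init_bg: (H, W, 3)."""
    cur = np.asarray(current, dtype=np.float64)
    bg = np.asarray(init_bg, dtype=np.float64)
    # assumed weight rule: large difference -> small background weight
    w = np.exp(-np.asarray(diff, dtype=np.float64) / tau)[..., None]
    blended = w * bg + (1.0 - w) * cur       # per-pixel weighted combination
    return np.where(w >= 0.5, blended, cur)  # assumed branch condition
```

The effect is that static regions keep the stable initial background, while regions with large, persistent change let the current image dominate, which is what makes the extracted color background "real-time".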
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110269289.7A CN112991293B (en) | 2021-03-12 | 2021-03-12 | Quick self-adaptive real-time color background extraction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112991293A (en) | 2021-06-18 |
CN112991293B CN112991293B (en) | 2024-04-26 |
Family
ID=76334611
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110269289.7A Active CN112991293B (en) | 2021-03-12 | 2021-03-12 | Quick self-adaptive real-time color background extraction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112991293B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110175984A1 (en) * | 2010-01-21 | 2011-07-21 | Samsung Electronics Co., Ltd. | Method and system of extracting the target object data on the basis of data concerning the color and depth |
CN109859236A (en) * | 2019-01-02 | 2019-06-07 | 广州大学 | Mobile object detection method, calculates equipment and storage medium at system |
Also Published As
Publication number | Publication date |
---|---|
CN112991293B (en) | 2024-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108648161B (en) | Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network | |
CN109919032B (en) | Video abnormal behavior detection method based on motion prediction | |
CN108288270B (en) | Target detection method based on channel pruning and full convolution deep learning | |
CN106157329B (en) | Self-adaptive target tracking method and device | |
CN110910421B (en) | Weak and small moving object detection method based on block characterization and variable neighborhood clustering | |
CN111950723A (en) | Neural network model training method, image processing method, device and terminal equipment | |
CN111079539B (en) | Video abnormal behavior detection method based on abnormal tracking | |
CN111242026B (en) | Remote sensing image target detection method based on spatial hierarchy perception module and metric learning | |
CN107527370B (en) | Target tracking method based on camshift | |
CN112364865B (en) | Method for detecting small moving target in complex scene | |
CN111652869A (en) | Slab void identification method, system, medium and terminal based on deep learning | |
CN115512251A (en) | Unmanned aerial vehicle low-illumination target tracking method based on double-branch progressive feature enhancement | |
CN111325204A (en) | Target detection method, target detection device, electronic equipment and storage medium | |
CN112508851A (en) | Mud rock lithology recognition system based on CNN classification algorithm | |
CN112560799B (en) | Unmanned aerial vehicle intelligent vehicle target detection method based on adaptive target area search and game and application | |
CN110930436B (en) | Target tracking method and device | |
CN103578121B (en) | Method for testing motion based on shared Gauss model under disturbed motion environment | |
CN110889347B (en) | Density traffic flow counting method and system based on space-time counting characteristics | |
CN112991293A (en) | Fast self-adaptive real-time color background extraction method | |
CN110322479B (en) | Dual-core KCF target tracking method based on space-time significance | |
US10776932B2 (en) | Determining whether ground is to be re-detected | |
CN105184809A (en) | Moving object detection method and moving object detection device | |
CN109657577B (en) | Animal detection method based on entropy and motion offset | |
CN115294035B (en) | Bright spot positioning method, bright spot positioning device, electronic equipment and storage medium | |
CN113255549B (en) | Intelligent recognition method and system for behavior state of wolf-swarm hunting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |