CN116563327A - Dynamic scene background modeling method based on box diagram - Google Patents


Info

Publication number
CN116563327A
CN116563327A (application CN202310557491.9A)
Authority
CN
China
Prior art keywords
background
queue
modeling
foreground
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310557491.9A
Other languages
Chinese (zh)
Inventor
王文标
郝友维
时启衡
张谦谦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202310557491.9A
Publication of CN116563327A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a dynamic scene background modeling and foreground detection method based on a box diagram, which comprises the following steps: storing the obtained video images into a background queue at a dynamic frame extraction rate; based on the current background queue state, removing moving interference targets by using a box diagram statistical method, and modeling the background from which the interference targets have been removed to obtain a background model; and performing a differential operation on each frame of the acquired video using the background model to obtain the foreground. Experiments prove that the dynamic scene background modeling method based on the box diagram can effectively improve modeling efficiency and controllability, and is of practical significance in real dynamic scene foreground detection tasks.

Description

Dynamic scene background modeling method based on box diagram
Technical Field
The invention belongs to the field of background modeling, and particularly relates to a dynamic scene background modeling method based on a box diagram.
Background
With the development of computer vision technology, the video monitoring industry is widely applied in the fields of intelligent transportation, security monitoring, industrial automation and the like. The background modeling method is taken as a basic image processing technology, and has important significance for realizing target detection, tracking and analysis in dynamic video monitoring.
The core task of the background modeling method is to separate foreground objects (e.g., pedestrians, vehicles, etc.) and background scenes from dynamic scene video sequences for subsequent processing and analysis. The process has important values in the aspects of reducing data volume, reducing calculation complexity, improving the accuracy of target identification and tracking and the like. By modeling the background, the difference between the target object and the background can be effectively distinguished, so that the detection and tracking of the target object are realized.
However, in practical applications, background modeling methods face many challenges, especially in dynamic scenes, where the difficulty and complexity of background modeling are more prominent. The motion of dynamic backgrounds in a video sequence (such as wind-blown grass or pedestrians walking through) interferes with background modeling, and the dynamic background is difficult to distinguish from real foreground objects, reducing the accuracy of target detection and tracking. Moreover, traditional background modeling methods generally require the scene to remain static during the modeling stage and cannot automatically filter moving interference targets that appear in it. In addition, when the scene changes, such as when a new stationary object appears, the rest-time threshold at which the new stationary object is absorbed into the background is difficult to control.
Disclosure of Invention
In order to solve the problems that motion generated by dynamic backgrounds (such as wind-blown grass and walking pedestrians) in a video sequence disturbs background modeling, that the dynamic background and real foreground targets are difficult to distinguish so that the accuracy of target detection and tracking is reduced, that traditional background modeling methods generally require a static scene during the modeling stage and cannot automatically filter moving interference targets appearing in the scene, and that the rest-time threshold for absorbing a new stationary object into the background is difficult to control, the technical scheme adopted by the invention is as follows:
A dynamic scene background modeling and foreground detection method based on a box diagram comprises the following steps:
storing the obtained video image into a background queue according to the dynamic frame extraction speed;
based on the current background queue state, removing the moving interference target by using a box diagram statistical method, and modeling the background from which the interference target is removed to obtain a background model;
and performing a differential operation on each frame of the acquired video using the background model to obtain the foreground.
Further, the process of removing and modeling the moving interference target by using the box diagram statistical method based on the current background queue state to obtain the background model is as follows:
t21, taking the current background queue state, partitioning the background queue in the long dimension and the wide dimension, and calling background queue sub-blocks;
t22, counting pixel values of the background queue sub-blocks sequentially by using a box line diagram counting method, finding out frames shot with the mobile interference targets in each block, and eliminating the frames shot with the mobile interference targets from the background queue sub-blocks;
and T23, modeling the rejected background queue sub-blocks by using statistics, and then splicing the background queue sub-blocks into a background model with the original image size.
Further, the process of saving the acquired video image into the background queue according to the dynamic frame extraction speed is as follows:
obtaining frames from a camera, sending one frame into the background queue every n frames, and popping the oldest background frame from the head of the queue; the value of n is determined according to the speed of the moving interference targets to be removed from the scene, and is associated with the frame rate so that the method can match cameras with different frame rates; the value of n is determined by the following formula:
n=F*t (1)
wherein F is the frame rate of the camera or the video, t is the time difference between two frames in the background queue, and the value of n is finally determined by designating t.
Further, the background queue is partitioned in a long dimension and a wide dimension, and the partition granularity a is determined according to the size of the moving target appearing in the camera picture.
Further, the process of counting the pixel values of the background queue sub-blocks by sequentially using a box line graph statistical method, finding out the frames of the moving interference targets shot in each block, and eliminating the frames of the moving interference targets shot from the background queue sub-blocks is as follows:
calculating the lower quartile Q1, the upper quartile Q3 and the quartile distance iqr of the background queue sub-block in the queue dimension, and calculating an upper whisker line and a lower whisker line according to the following formula as a standard for defining a normal value and an abnormal value:
lower_bound = Q1 - 1.5 * iqr (2)
upper_bound = Q3 + 1.5 * iqr (3)
wherein: lower_bound is lower whisker and upper_bound is upper whisker;
judging that the values except the upper whisker and the lower whisker are abnormal values, namely, recognizing that the frame sub-block shoots a moving interference target; and eliminating all frames containing abnormal values in the background queue sub-blocks, wherein the eliminated background queues are all pure background frames without foreground.
Further, the statistics are the maximum and minimum values along the background queue dimension.
Further, a background model is adopted to perform a differential operation on each frame of the acquired video; the process of obtaining the foreground is as follows:
acquiring a next frame of image;
and continuously updating the background based on the background model, and using the current background model to judge whether each point in the image lies within the background model range, wherein pixels not within the background range are regarded as the foreground.
Further, the lower speed limit allowed for a moving interference target under the background model is as follows:
V > w/(m*t*Ω) (4)
wherein: V is the lower speed limit, in pixels per second; w is the motion distance of the moving interference target within one sub-block along its motion direction; t is the time difference between two frames in the background queue, in seconds; m is the total length of the background queue; Ω is the allowable anomaly rate of the box diagram, which determines the maximum outlier proportion that can be tolerated while the box diagram can still normally reject outliers from the queue.
Further, the lower limit of the stationary duration of the background model for absorbing the stationary target into the background is as follows:
T>m*t*Ω (5)
stationary objects that are stationary for longer than the lower limit of the stationary period will be absorbed as background.
Further, the background model includes an upper limit and a lower limit within which each pixel position is regarded as background; each pixel of the frame to be detected is compared with the upper and lower limits of the corresponding position, and if the pixel is not within the background value range, the position is regarded as foreground; the comparison is according to the following formula:
mask = (X > B0 + λ) OR (X < B1 - λ) (6)
wherein: X is the image to be detected; B0 is dimension 0 of the background model, i.e., the maximum value at each position of the rejected background queue; B1 is dimension 1 of the background model, i.e., the minimum value at each position of the rejected background queue; λ is a denoising factor, which needs to be determined according to the noise fluctuation range captured by the video lens; devices prone to noise should increase this value appropriately.
The box diagram-based dynamic scene background modeling and foreground detection method provided by the invention has the following advantages: it eliminates the adverse effect of moving interference targets in the modeling-stage picture on background modeling, and makes both the lower limit of the speed of removable moving targets and the lower limit of the stationary time before a stationary object is absorbed controllable. After abnormal frames in the background queue are removed block by block using the box line graph, statistic modeling is performed on the remaining pure background queue. Experiments prove that the box line graph dynamic scene background modeling method can effectively improve modeling efficiency and controllability and is of practical significance in real dynamic scene foreground detection tasks. The method is an efficient, stable and controllable background modeling method suitable for dynamic scenes, and has important research significance and practical application value.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort to a person skilled in the art.
FIG. 1 is a flow chart of a dynamic scene background modeling method based on a box diagram;
FIG. 2 (a) is a background queue collected by thread A, (b) is a background lower-limit diagram after interference targets are filtered, and (c) is a background upper-limit diagram after interference targets are filtered;
FIG. 3 (a) is an image containing a moving object, and (b) is the foreground detection result of thread C.
Detailed Description
It should be noted that, without conflict, the embodiments of the present invention and features in the embodiments may be combined with each other, and the present invention will be described in detail below with reference to the drawings and the embodiments.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present invention. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise. Meanwhile, it should be clear that the dimensions of the respective parts shown in the drawings are not drawn in actual scale for convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
In the description of the present invention, it should be understood that the azimuth or positional relationships indicated by the azimuth terms such as "front, rear, upper, lower, left, right", "lateral, vertical, horizontal", and "top, bottom", etc., are generally based on the azimuth or positional relationships shown in the drawings, merely to facilitate description of the present invention and simplify the description, and these azimuth terms do not indicate and imply that the apparatus or elements referred to must have a specific azimuth or be constructed and operated in a specific azimuth, and thus should not be construed as limiting the scope of protection of the present invention: the orientation word "inner and outer" refers to inner and outer relative to the contour of the respective component itself.
Spatially relative terms, such as "above … …," "above … …," "upper surface at … …," "above," and the like, may be used herein for ease of description to describe one device or feature's spatial location relative to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "above" or "over" other devices or structures would then be oriented "below" or "beneath" the other devices or structures. Thus, the exemplary term "above … …" may include both orientations of "above … …" and "below … …". The device may also be positioned in other different ways (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
In addition, the terms "first", "second", etc. are used to define the components, and are only for convenience of distinguishing the corresponding components, and the terms have no special meaning unless otherwise stated, and therefore should not be construed as limiting the scope of the present invention.
FIG. 1 is a flow chart of a dynamic scene background modeling method based on a box diagram;
the invention discloses a dynamic scene background modeling and foreground detection method based on a box diagram, which specifically comprises the following steps:
t1, the thread A stores the acquired video image into a background queue according to the dynamic frame extraction speed; the video image is shot by a camera;
t2, thread B eliminates the moving interference target by using a box diagram statistical method based on the current background queue state, and models the background after eliminating the interference target to obtain a background model;
and T3, carrying out differential operation on each frame of image of the acquired video by adopting a background model to obtain a prospect.
Steps T1/T2/T3 are executed in sequence, where T2 is started only after T1 has pushed at least as many frames as the background queue length, and T3 is started only after T2 has completed at least one modeling pass;
the thread B copies the current state of the background queue stored by the thread A as a local variable to carry out background modeling, and simultaneously, the thread A continuously updates a new image transmitted by the camera to the global background queue so that the subsequent modeling uses the latest background queue to carry out modeling.
The process of capturing images from the camera by the thread A according to the dynamic frame extraction speed and storing the images into a background queue is as follows:
Every n frames, the image currently transmitted by the camera is sent into the global background queue for other threads to take, and the oldest image at the head of the queue is popped. The value of n is determined according to the speed of the moving interference targets to be removed from the scene; in order to match cameras with different frame rates, n is associated with the frame rate, i.e., n is raised appropriately for a high-frame-rate camera and lowered for a low-frame-rate camera. The essence of controlling n is to control the interval duration between the frames in the background queue, ensuring that moving interfering objects can move far enough between frames to expose the background behind them, thereby providing sufficient background information for the subsequent culling operation.
The dynamic frame extraction rate means that the value of n is dynamic: n is not fixed while the method runs, but changes according to the state of the current picture. If only dynamic interference targets are detected in the picture, the background queue is updated normally every n frames; when a stationary target appears in the picture and needs to be absorbed into the background model quickly, n can be temporarily reduced so that the new stationary target appears throughout the background queue as soon as possible and is thus absorbed quickly.
The value of n is determined according to the speed of a mobile interference target to be removed in a scene, and the interval duration of two frames is finally controlled by correlating the value of n with the frame rate, and mapping is carried out according to the following formula:
n=F*t (1)
wherein F is the frame rate of the camera or video, t is the time difference between two frames in the background queue, and finally t is indirectly changed by changing n.
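As a minimal sketch (not part of the patent text), the frame-extraction rule of formula (1) can be written as follows; the queue length m = 30 and the helper names are assumptions for illustration:

```python
import collections

def frame_interval(frame_rate, t):
    """Map the desired time gap t (seconds) between queued frames to a
    frame interval n, so the queue spacing is camera-independent (formula 1)."""
    return round(frame_rate * t)

# Hypothetical global background queue: a fixed-length deque pops the
# oldest frame from the head automatically when a new frame is appended.
m = 30                                   # assumed total queue length
background_queue = collections.deque(maxlen=m)

def maybe_enqueue(frame, frame_index, n):
    """Send every n-th camera frame into the background queue."""
    if frame_index % n == 0:
        background_queue.append(frame)

# A 25 fps camera with a 0.4 s gap between queued frames gives n = 10;
# temporarily lowering t (and hence n) speeds up absorption of a new
# stationary target, as described above.
n = frame_interval(25, 0.4)
```

In this sketch, thread A would call `maybe_enqueue` once per captured frame; changing t indirectly changes n exactly as the text describes.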
FIG. 2 (a) is the background queue collected by thread A, (b) is the background lower-limit diagram after interference targets are filtered, and (c) is the background upper-limit diagram after interference targets are filtered.
The thread B performs mobile interference target elimination by using a box diagram statistical method based on the current background queue state, and models the background after eliminating the interference target, and the process of obtaining a background model is as follows:
t21, taking the current background queue state, partitioning the background queue into blocks in a long dimension and a wide dimension, and calling background queue sub-blocks;
the current state of the background queue stored by the copy thread A is used as a local variable for background modeling, and meanwhile, the thread A continuously updates a new image transmitted by the camera to the global background queue so that the subsequent modeling uses the latest background queue for modeling, and the broken line in the figure 1 represents the parameter transmission in the form. Dividing the stored background queue into sub-blocks of a x a in the long dimension and the wide dimension.
Firstly, the stored background queue is divided into sub-blocks of a x a in long dimension and wide dimension, and a is the block granularity. The blocking operation is to avoid that a small object appears in the picture to totally reject the frame, and when the object moves too slowly, the frame is rejected without blocking, which may result in that all frames in the background queue are rejected. The granularity of the tiles a should be determined according to the size of the possible moving object that appears in the camera frame, and the smaller the moving object, the smaller the size of the tiles is required. The block granularity also determines the lower speed limit of the mobile interference target which can be removed by the algorithm, but the block granularity is not variable in the running process, so that the block granularity is only used for deducing the lower speed limit under the current parameters, and the lower speed limit of the removable target still needs to be realized by modifying n if dynamic control is needed.
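A minimal sketch of the blocking step (illustrative only; the patent does not prescribe an implementation, and the frame height H and width W are assumed divisible by a):

```python
import numpy as np

def split_into_blocks(queue, a):
    """Split a background queue of shape (m, H, W, ...) into a-by-a
    spatial sub-blocks, keyed by their top-left (row, col) offset."""
    H, W = queue.shape[1], queue.shape[2]
    blocks = {}
    for i in range(0, H, a):
        for j in range(0, W, a):
            blocks[(i, j)] = queue[:, i:i + a, j:j + a]
    return blocks

# 20 grayscale frames of size 8x8 with block granularity a = 4 give
# 4 sub-blocks, each a queue of shape (20, 4, 4).
queue = np.zeros((20, 8, 8), dtype=np.uint8)
blocks = split_into_blocks(queue, 4)
```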
T22, counting pixel values of the background queue sub-blocks sequentially by using a box line diagram counting method, finding out frames shot with the mobile interference targets in each block, and eliminating the frames shot with the mobile interference targets from the background queue sub-blocks;
calculating the lower quartile Q1, the upper quartile Q3 and the quartile distance iqr of the background queue sub-block in the queue dimension, and calculating an upper whisker line and a lower whisker line according to the following formula as a standard for defining a normal value and an abnormal value:
lower_bound = Q1 - 1.5 * iqr (2)
upper_bound = Q3 + 1.5 * iqr (3)
wherein: lower_bound is lower whisker and upper_bound is upper whisker;
since each pixel point of the color image contains three RGB color channels, if one of the three channels is abnormal, the pixel point should be judged as abnormal, so after the channel dimension is judged, the judgment result matrix is OR-operated according to the color channel dimension as the judgment result of each final pixel point.
When the judgment result matrix contains any abnormal mark, the image sub-block of that frame is considered to have captured part of a moving interference target, and it is removed from the sub-block background queue. If all frames in a certain sub-block are removed, the first frame of the background queue is retained as the default background.
Using the box diagram statistical method, the lower quartile Q1, the upper quartile Q3 and the quartile distance iqr of each sub-block are calculated in the queue dimension, and the range between Q1 and Q3 is expanded by 1.5 times iqr as the normal value range; values outside this range are regarded as abnormal.
The statistics such as the lower quartile Q1 and the upper quartile Q3 calculated by applying the box diagram statistical method to the matrix are actually statistic arrays; each pixel value in every frame of the background queue is judged against these arrays to determine whether it lies within the upper and lower whisker range of its position, and a boolean matrix finally marks the judgment result of each pixel.
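The per-sub-block rejection of T22 can be sketched as follows (an illustration, not the patent's implementation; `np.percentile` with its default linear interpolation stands in for the unspecified quartile method):

```python
import numpy as np

def reject_outlier_frames(block):
    """Boxplot rejection for one color sub-block queue of shape (m, a, a, 3).
    Returns only the frames whose sub-block contains no outlier pixel."""
    q1 = np.percentile(block, 25, axis=0)        # lower quartile, per pixel/channel
    q3 = np.percentile(block, 75, axis=0)        # upper quartile
    iqr = q3 - q1                                # quartile distance
    lower = q1 - 1.5 * iqr                       # lower whisker (formula 2)
    upper = q3 + 1.5 * iqr                       # upper whisker (formula 3)
    outlier = (block < lower) | (block > upper)  # per-channel boolean matrix
    outlier = outlier.any(axis=-1)               # OR along the color channels
    bad_frame = outlier.any(axis=(1, 2))         # frame captured an interferer
    kept = block[~bad_frame]
    if kept.shape[0] == 0:                       # all rejected: keep frame 0
        kept = block[:1]
    return kept
```

For example, a single frame of value 255 passing through an otherwise constant sub-block of value 100 is rejected, while the steady frames survive.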
T23, modeling the rejected background queue sub-blocks by using statistics, and then splicing the background queue sub-blocks into a background model with the size of the original image;
and (3) carrying out statistic modeling on the background queue which nearly contains the pure non-mobile interference target by each sub-block after the sub-block is removed by using the maximum value and the minimum value in sequence, and finally obtaining a model matrix containing the maximum value of each pixel position.
The lower speed limit allowed for a moving interference target under the background model finally established by thread B is as follows:
V > w/(m*t*Ω) (4)
wherein: V is the lower speed limit, in pixels per second; w is the motion distance of the moving interference target within one sub-block along its motion direction, generally taken as the maximum diagonal length of one sub-block, in pixels; t is the time difference between two frames in the background queue, in seconds; m is the total length of the background queue; Ω is the allowable anomaly rate of the box diagram, which determines the maximum outlier proportion that can be tolerated while the box diagram can still normally reject outliers from the queue. The value of Ω is related to the degree of abnormality between normal and abnormal values: the greater the abnormality, the larger the allowable Ω, but Ω is commonly between 15% and 25%.
From formula (4), the lower speed limit of removable interference targets under different parameter values can be derived, and the removable targets are controlled by controlling the parameters: targets whose speed exceeds this threshold are eliminated from the background. The unit of V is pixels per second; mapping to actual distance requires a further transformation according to the camera position and lens distortion.
The lower limit of the stationary duration of the background model finally established by the thread B for absorbing the stationary target into the background is as follows:
T>m*t*Ω (5)
A stationary object whose stationary time exceeds this lower limit is absorbed as background; from the beginning to the completion of absorption, the time of one additional background modeling pass is required.
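A worked numeric check of formulas (4) and (5); the parameter values below are illustrative assumptions, not values prescribed by the patent:

```python
def speed_lower_bound(w, m, t, omega):
    """Slowest removable moving interference target, in px/s (formula 4)."""
    return w / (m * t * omega)

def absorption_time_lower_bound(m, t, omega):
    """Shortest stationary time before absorption as background, in s (formula 5)."""
    return m * t * omega

# Assumed parameters: sub-block diagonal w = 45 px (32x32 blocks),
# queue length m = 30, frame gap t = 0.5 s, anomaly rate omega = 20 %.
v_min = speed_lower_bound(45, 30, 0.5, 0.2)        # 15 px/s
t_min = absorption_time_lower_bound(30, 0.5, 0.2)  # 3 s
```

Under these assumed parameters, targets moving faster than 15 px/s are removed from the background, and a new object must stay still for more than 3 s before it starts being absorbed.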
Modeling the rejected background queue by using statistics, and then splicing the background queue into a background model with the original image size, wherein the process is as follows:
and (3) carrying out statistic modeling on the background queue which nearly contains the pure non-mobile interference target by each sub-block after the sub-block is removed by using the maximum value and the minimum value in sequence, and finally obtaining a model matrix containing the maximum value of each pixel position.
Note that the statistical model established in this step is the model used in the final foreground detection stage. Although it resembles the statistics used when the box line diagram rejects anomalies, the two differ essentially: in the box diagram stage the vector to be counted may contain both abnormal and normal values. If the background queue were modeled directly with Q1 and Q3, the modeling granularity would be pixel level; when a moving interference target in the picture is uniform in color and long in extent, only its head and tail ends would be rejected in the areas it passes, while its middle part would appear in a large fraction of the background queue, so the box diagram would treat the moving target as background and mix it with the real background for modeling. The final model would then be neither the interference target nor the pure background.
In this method, the box diagram statistical method is used only to identify and reject abnormal sub-blocks and performs no statistic modeling task: the box diagram removes, in every sub-block, the frames considered to contain interference targets, and the maximum and minimum statistics then model the rejected background queue, whose vector to be counted now contains only normal values. This avoids a slowly moving object being judged as background and confused with the real background. In addition, the background queues of the sub-blocks after rejection are not necessarily the same length, so the sub-blocks cannot be stitched directly; instead, each sub-block is first modeled as upper and lower limits of the pixel values at each point, and the models are then stitched. If all frames in a certain sub-block are removed, the first frame of the background queue is retained as the default background.
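The max/min modeling and stitching described above can be sketched as follows (illustrative only; sub-block queues of different lengths are handled naturally because each queue is reduced to two layers before stitching):

```python
import numpy as np

def build_model(blocks, a, H, W, channels=3):
    """Model each rejected sub-block queue by its per-pixel maximum and
    minimum, then stitch back to full size: dimension 0 of the model is
    the background upper limit, dimension 1 the lower limit."""
    model = np.zeros((2, H, W, channels))
    for (i, j), q in blocks.items():            # q: (k, a, a, C), k varies
        model[0, i:i + a, j:j + a] = q.max(axis=0)
        model[1, i:i + a, j:j + a] = q.min(axis=0)
    return model

# Two 2x2 sub-blocks whose queues have different lengths after rejection.
qa = np.stack([np.full((2, 2, 3), v) for v in (1.0, 2.0, 3.0)])  # 3 frames kept
qb = np.stack([np.full((2, 2, 3), v) for v in (4.0, 5.0)])       # 2 frames kept
model = build_model({(0, 0): qa, (0, 2): qb}, a=2, H=2, W=4)
```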
Fig. 3(a) shows an image containing a moving object, and Fig. 3(b) shows the foreground detection effect. Thread C performs a differential operation on each frame transmitted from the camera, using the model established by thread B, to obtain the foreground. The process is as follows:
thread C copies the current background model as a local variable, while thread B continues to update the global background model, so that each newly updated model can immediately be applied to the next frame of the image to be inspected.
Namely: acquiring the next frame of image;
and, while the background continues to be updated, judging with the current background model whether each point in the image lies within the background model range, the pixel points not in the background range being regarded as the foreground.
The background model comprises, for each pixel position, the upper and lower limits required for a background value. Each pixel of the frame to be detected is compared with the upper and lower limits at the corresponding position; if the pixel is not within the background value range, that position is regarded as foreground. To avoid the influence of noise captured by the camera, the actual comparison is according to the following formula:

F(i,j) = 1, if X(i,j) > B0(i,j) + λ or X(i,j) < B1(i,j) - λ; otherwise F(i,j) = 0

wherein: F is the resulting binary mask; X is the image to be detected; B0 is dimension 0 of the background model, namely the maximum value at each position of the purged background queue; B1, the 1st dimension of the background model, is the minimum value at each position of the purged background queue; λ is a denoising factor, determined according to the noise fluctuation range captured by the video lens, and noise-prone devices should increase this value appropriately. The result is a binary mask matrix with the same height and width as the original image; the matrix can be displayed directly as a binary image, or ANDed with the original image to separate the foreground.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (10)

1. The dynamic scene background modeling and foreground detection method based on the box diagram is characterized by comprising the following steps of:
storing the obtained video image into a background queue according to the dynamic frame extraction speed;
based on the current background queue state, removing the moving interference target by using a box diagram statistical method, and modeling the background from which the interference target is removed to obtain a background model;
and performing a differential operation on each frame image of the acquired video with the background model to obtain the foreground.
2. The method for modeling and foreground detection of dynamic scene background based on box diagram according to claim 1, wherein the process of removing and modeling moving interference targets by using a box diagram statistical method based on the current background queue state to obtain a background model is as follows:
T21, taking the current background queue state and partitioning the background queue in the length and width dimensions to obtain background queue sub-blocks;
T22, counting the pixel values of the background queue sub-blocks in sequence with the box diagram statistical method, finding the frames in which a moving interference target was captured in each block, and eliminating those frames from the background queue sub-blocks;
and T23, modeling the purged background queue sub-blocks with statistics, and then splicing the sub-blocks into a background model of the original image size.
3. The method for modeling and detecting the background and the foreground of the dynamic scene based on the box diagram according to claim 1, wherein the method is characterized by comprising the following steps: the process of storing the acquired video image into the background queue according to the dynamic frame extraction speed is as follows:
obtaining frames from the camera and sending one frame into the background queue every n frames, with the oldest background frame popped from the head of the queue; the value of n is determined according to the speed of the moving interference target to be removed in the scene, and is associated with the frame rate so that the method matches cameras of different frame rates; the value of n is determined by the following formula:
n=F*t (1)
wherein F is the frame rate of the camera or video, t is the time difference between two adjacent frames in the background queue, and the value of n is finally determined by specifying t.
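The dynamic frame-extraction rule of formula (1) can be sketched as follows (the function names and the queue length m used in the test are illustrative assumptions):

```python
from collections import deque

def frame_interval(F, t):
    """Formula (1): n = F * t. Push one frame into the background queue every
    n camera frames, where F is the frame rate and t the desired time gap
    between queued frames, in seconds."""
    return int(round(F * t))

def push_frame(queue, frame, m=50):
    """Append a new background frame; pop the oldest from the head once the
    queue reaches its fixed length m."""
    queue.append(frame)
    if len(queue) > m:
        queue.popleft()
    return queue
```

Tying n to the frame rate through t is what lets the same t (and hence the same temporal coverage of the queue) work across cameras with different frame rates.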
4. The method for modeling and detecting the background and the foreground of the dynamic scene based on the box diagram according to claim 1, wherein the method is characterized by comprising the following steps: and the background queue is partitioned in the long dimension and the wide dimension, and the partition granularity a is determined according to the size of the moving target appearing in the camera picture.
5. The method for modeling and detecting the background and the foreground of the dynamic scene based on the box diagram according to claim 1, wherein the method is characterized by comprising the following steps: the process of counting the pixel values of the background queue sub-blocks in sequence with the box diagram statistical method, finding the frames in which a moving interference target was captured in each block, and eliminating those frames from the background queue sub-blocks is as follows:
calculating the lower quartile Q1, the upper quartile Q3 and the interquartile range iqr of the background queue sub-block along the queue dimension, and calculating the upper and lower whisker lines according to the following formulas as the criterion for distinguishing normal values from abnormal values:
lower_bound=Q1 - 1.5 * iqr (2)
upper_bound = Q3 + 1.5 * iqr (3)
wherein: lower_bound is lower whisker and upper_bound is upper whisker;
values outside the upper and lower whiskers are judged to be abnormal, i.e., the sub-block of that frame is considered to have captured a moving interference target; all frames containing abnormal values are eliminated from the background queue sub-block, so that the purged background queue contains only pure background frames without foreground.
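Formulas (2) and (3) applied to a single pixel's value vector taken along the queue dimension might be sketched as (function names are assumptions):

```python
import numpy as np

def whiskers(values, k=1.5):
    """Lower/upper whiskers per formulas (2) and (3) for one pixel's
    value vector along the background-queue dimension."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def outlier_frames(values, k=1.5):
    """Indices of queued frames whose value at this pixel lies outside the
    whiskers, i.e., frames suspected of containing a moving target."""
    lo, hi = whiskers(values, k)
    return np.where((values < lo) | (values > hi))[0]
```

In the full method this test is run over every pixel of a sub-block, and a frame is eliminated if any of its pixels is flagged.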
6. The method for modeling and detecting the background and the foreground of the dynamic scene based on the box diagram according to claim 1, wherein the method is characterized by comprising the following steps: the statistics used for modeling are the maximum and minimum values taken along the background queue dimension.
7. The method for modeling and detecting the background and the foreground of the dynamic scene based on the box diagram according to claim 1, wherein the method is characterized by comprising the following steps: the process of performing a differential operation on each frame image of the acquired video with the background model to obtain the foreground is as follows:
acquiring a next frame of image;
and, while continuing to update the background, judging with the current background model whether each point in the image lies within the background model range, the pixel points not in the background range being regarded as the foreground.
8. The method for modeling and detecting the background and the foreground of the dynamic scene based on the box diagram according to claim 1, wherein the method is characterized by comprising the following steps: the lower limit of the speed allowed for a moving interference target by the background model is:

v > w / (m * t * Ω) (4)

wherein: v is the speed of the moving interference target; w is the motion distance of the moving interference target across one sub-block in its motion direction; t is the time difference between two frames in the background queue, in seconds; m is the total length of the background queue; Ω is the allowable anomaly rate of the box diagram, which determines the maximum proportion of abnormal values that can be tolerated while the box diagram can still normally reject outliers from the queue.
9. The method for modeling and detecting the background and the foreground of the dynamic scene based on the box diagram according to claim 1, wherein the method is characterized by comprising the following steps: the lower limit of the stationary time length of the background model for absorbing the stationary target into the background is as follows:
T>m*t*Ω (5)
a stationary object whose stationary duration exceeds this value will be absorbed into the background.
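Formulas (4) and (5) reduce to simple arithmetic on the queue parameters; a hedged sketch (function names are assumptions, and formula (4) is stated here as the reconstructed bound v > w / (m * t * Ω)):

```python
def min_removable_speed(w, m, t, omega):
    """Formula (4): the slowest interference-target speed the model can
    still reject. w = distance across one sub-block along the motion
    direction, m = queue length, t = seconds between queued frames,
    omega = allowable anomaly rate of the box diagram."""
    return w / (m * t * omega)

def absorption_time(m, t, omega):
    """Formula (5): a target stationary longer than m * t * omega seconds
    is absorbed into the background."""
    return m * t * omega
```

Note that the two bounds are two faces of the same constraint: a target must occupy a sub-block for less than m * t * Ω seconds (the queue's tolerated anomaly span) to be rejected as an outlier rather than absorbed.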
10. The method for modeling and detecting the background and the foreground of the dynamic scene based on the box diagram according to claim 1, wherein the method is characterized by comprising the following steps: the background model comprises the upper and lower limits corresponding to the background value at each pixel position; each pixel of the frame to be detected is compared with the upper and lower limits at the corresponding position, and if the pixel is not within the background value range, that position is regarded as foreground, the comparison being according to the following formula:

F(i,j) = 1, if X(i,j) > B0(i,j) + λ or X(i,j) < B1(i,j) - λ; otherwise F(i,j) = 0

wherein: F is the resulting binary mask; X is the image to be detected; B0 is dimension 0 of the background model, namely the maximum value at each position of the purged background queue; B1, the 1st dimension of the background model, is the minimum value at each position of the purged background queue; λ is a denoising factor, determined according to the noise fluctuation range captured by the video lens, and noise-prone devices should increase this value appropriately.
CN202310557491.9A 2023-05-17 2023-05-17 Dynamic scene background modeling method based on box diagram Pending CN116563327A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310557491.9A CN116563327A (en) 2023-05-17 2023-05-17 Dynamic scene background modeling method based on box diagram


Publications (1)

Publication Number Publication Date
CN116563327A true CN116563327A (en) 2023-08-08

Family

ID=87497901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310557491.9A Pending CN116563327A (en) 2023-05-17 2023-05-17 Dynamic scene background modeling method based on box diagram

Country Status (1)

Country Link
CN (1) CN116563327A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117633573A * 2024-01-26 2024-03-01 南京群顶科技股份有限公司 Electric quantity data anomaly detection method and system based on air conditioner operation working condition
CN117633573B * 2024-01-26 2024-04-16 南京群顶科技股份有限公司 Electric quantity data anomaly detection method and system based on air conditioner operation working condition

Similar Documents

Publication Publication Date Title
WO2021208275A1 (en) Traffic video background modelling method and system
CN112308095A (en) Picture preprocessing and model training method and device, server and storage medium
CN113286194A (en) Video processing method and device, electronic equipment and readable storage medium
CN111062974B (en) Method and system for extracting foreground target by removing ghost
CN109685045B (en) Moving target video tracking method and system
CN108876820B (en) Moving target tracking method under shielding condition based on mean shift
WO2006115427A1 (en) Three-dimensional road layout estimation from video sequences by tracking pedestrians
CN105809716B (en) Foreground extraction method integrating superpixel and three-dimensional self-organizing background subtraction method
WO2012076586A1 (en) Method and system for segmenting an image
CN110059634B (en) Large-scene face snapshot method
CN107590427B (en) Method for detecting abnormal events of surveillance video based on space-time interest point noise reduction
CN112487898A (en) Automatic judgment method, equipment and system for alignment of inlet and outlet of mixing truck in mixing plant
CN107346547A (en) Real-time foreground extracting method and device based on monocular platform
US20220083808A1 (en) Method and apparatus for processing images, device and storage medium
CN109166137A (en) For shake Moving Object in Video Sequences detection algorithm
CN116563327A (en) Dynamic scene background modeling method based on box diagram
CN107295296A (en) A kind of selectively storage and restoration methods and system of monitor video
Roy et al. A comprehensive survey on computer vision based approaches for moving object detection
Liu et al. Scene background estimation based on temporal median filter with Gaussian filtering
CN110163132A (en) A kind of correlation filtering tracking based on maximum response change rate more new strategy
CN106780544B (en) The method and apparatus that display foreground extracts
CN108520496B (en) Sea-air background monitoring video image splicing method based on optical flow method
CN107452019B (en) Target detection method, device and system based on model switching and storage medium
CN113658197A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN116958880A (en) Video flame foreground segmentation preprocessing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination