WO2018068311A1 - Background model extraction apparatus, traffic congestion condition detection method and apparatus - Google Patents

Background model extraction apparatus, traffic congestion condition detection method and apparatus

Info

Publication number
WO2018068311A1
Authority
WO
WIPO (PCT)
Prior art keywords
congestion index
congestion
image
index
predetermined area
Prior art date
Application number
PCT/CN2016/102156
Other languages
English (en)
French (fr)
Inventor
Zhang Cong
Wang Qi
Original Assignee
FUJITSU LIMITED
Zhang Cong
Wang Qi
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJITSU LIMITED, Zhang Cong, Wang Qi
Priority to CN201680087770.1A (patent CN109479120A)
Priority to PCT/CN2016/102156 (patent WO2018068311A1)
Publication of WO2018068311A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an apparatus for extracting a background model, a method and apparatus for detecting a traffic congestion condition.
  • the extraction of background images is widely used in the field of image monitoring and the like.
  • the difference between the current frame and the reference frame can be compared, thereby extracting the foreground image.
  • The reference frame may be referred to as a "background image" or represented using a "background model".
  • The embodiment of the invention provides a method and device for extracting a background model that update the background model in consideration of the moving speed of the image detection area, so that the foreground image is determined based on the updated background model. The extraction is thereby robust under different video conditions, the accuracy of foreground image detection is improved, and the problem of low foreground detection accuracy caused by foreground integration and foreground locking is avoided.
  • The embodiment of the invention provides a method and device for detecting a traffic congestion condition, which detect whether traffic congestion occurs according to a congestion index related to the density of the vehicles and their moving speed, thereby improving the accuracy of detecting the traffic congestion condition, with good robustness and strong noise tolerance.
  • an apparatus for extracting a background model comprising:
  • a first detecting unit configured to detect a moving speed of a predetermined area of the current image
  • a first updating unit configured to update the background model by using the image pixels that are not moving in the predetermined area as background pixels when the moving speed exceeds the first threshold.
  • a traffic congestion condition detecting apparatus comprising:
  • a first extracting unit configured to extract a foreground image from a predetermined area of the current image according to the background model
  • a calculation unit configured to calculate a congestion index according to a first congestion index, which is negatively correlated with the moving speed of the predetermined region, and a second congestion index;
  • wherein the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined region to the number of pixels of the current image in the predetermined area;
  • a first detecting unit configured to determine a traffic congestion condition according to the congestion index.
  • a method for detecting a traffic congestion condition comprising:
  • wherein the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined region to the number of pixels of the current image in the predetermined area;
  • the traffic congestion condition is determined based on the congestion index.
  • The beneficial effects of the embodiments of the present invention are that the background model extraction method and apparatus of this embodiment are robust under different video conditions and can improve the accuracy of foreground image detection, avoiding foreground integration and foreground locking.
  • The beneficial effects of the embodiments of the present invention are that the method and apparatus for detecting traffic congestion conditions according to this embodiment can improve the accuracy of traffic congestion detection, with good robustness and strong noise tolerance.
  • 1 is a flow chart of a method for extracting a background model in the first embodiment
  • FIG. 2 is a flow chart of a method for detecting a traffic congestion situation in the second embodiment
  • FIG. 3 is a schematic diagram showing the relationship between the first congestion index and the moving speed in the second embodiment
  • FIG. 4 is a schematic diagram showing the relationship between the weighting coefficient of the first congestion index and the weighting coefficient of the second congestion index and the second congestion index in the second embodiment
  • Figure 5 is a block diagram showing the structure of an extracting device of the background model in the third embodiment
  • FIG. 6 is a schematic diagram showing the hardware configuration of an extracting device of a background model in the third embodiment
  • Figure 7 is a block diagram showing the structure of a traffic congestion detecting device in the fourth embodiment.
  • Fig. 8 is a block diagram showing the hardware configuration of the traffic congestion detecting apparatus in the fourth embodiment.
  • the image monitoring scene in the traffic field is taken as an example for description, but the present invention is not limited thereto, and may be applied to other scenarios.
  • the first embodiment provides a method for extracting a background model.
  • FIG. 1 is a flowchart of a method for extracting a background model according to Embodiment 1, as shown in FIG. 1, the method includes:
  • Step 101 Detect a moving speed of a predetermined area of the current image
  • Step 102 When the moving speed exceeds the first threshold, update the background model by using the image pixels that are not moving in the predetermined area as background pixels.
  • The background model is updated in consideration of the moving speed of the predetermined area so that the foreground image is determined according to the updated background model, which is robust under different video conditions, improves the accuracy of foreground image detection, and avoids the problem of low detection accuracy caused by foreground integration and foreground locking.
  • The current image is a video image, which can be obtained by extracting a frame from the surveillance video; the surveillance video can be obtained by a camera mounted over a monitoring area such as a road.
  • A background model may be pre-generated using the prior art, for example a Gaussian mixture model, a Gaussian model, a Kalman filter or another statistical model, or the ViBe algorithm; the background model may then be continually updated using the method in steps 101-102. Alternatively, a background model can be generated using the method in steps 101-102 and continually updated with the same steps for each new current image.
  • The initial background model M0 is then continuously updated to obtain background models M1, M2, ..., Mi, where i is a positive integer.
  • Steps 101 to 102 illustrate a single update of the background model. If the updated background model obtained in step 102 for the current image is Mj, then processing the next frame with steps 101-102 yields the background model Mj+1, where j is 0 or any positive integer. Figure 1 illustrates one update as an example; in actual implementation the background model can be updated multiple times.
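As a minimal sketch of the update rule in steps 101-102, the following Python/NumPy function blends only the non-moving pixels of the current frame into the background model, and only when the area's moving speed exceeds the first threshold. The blending factor `alpha` and the threshold value are illustrative assumptions, not values from the patent:

```python
import numpy as np

def update_background(background, frame, motion_mask, speed,
                      speed_threshold=5.0, alpha=0.05):
    """Blend non-moving pixels of the current frame into the background model.

    background, frame: float arrays of the same shape (grayscale images).
    motion_mask: boolean array, True where motion was detected.
    speed: moving speed of the predetermined area.
    The update is applied only when the speed exceeds the first threshold,
    so a stationary foreground is never absorbed into the background.
    """
    if speed <= speed_threshold:
        return background  # foreground may be stopped; do not update
    updated = background.copy()
    still = ~motion_mask  # pixels with no detected motion
    updated[still] = (1 - alpha) * background[still] + alpha * frame[still]
    return updated
```

The running-average blend is one common choice; any update that folds only non-moving pixels into the model fits the described scheme.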
  • The foreground image may then be extracted from the next input frame according to the updated background model, thereby improving the accuracy of foreground image detection and avoiding foreground integration and foreground locking.
  • In step 101, the moving speed of the predetermined area of the current image may be detected using the prior art, for example, an optical flow method.
  • a velocity vector is given to each pixel of a predetermined area in the current image to form an image motion field, and the current image predetermined area can be dynamically analyzed according to the velocity vector feature of each pixel.
  • If there is no moving object in the predetermined area, the optical flow vector is stationary throughout the image area. If there is a moving object in the predetermined area of the current image, there is relative motion between the foreground and the background, and the velocity vector formed by the moving object differs from the velocity vector of the neighborhood background; the moving speed of the predetermined region is thereby obtained, for example, by taking the average moving speed within the predetermined area as the moving speed of the predetermined area.
  • The predetermined area may be a road area. When the predetermined area is a road area, the vehicles on the road can be tracked and detected so that the average moving speed of the vehicles is taken as the moving speed of the predetermined area. The "moving speed of the predetermined area" does not mean that the predetermined area itself is moving; it is simply defined as the average moving speed of the vehicles (objects) within the predetermined area.
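The "moving speed of the predetermined area" described above can be computed from a dense optical-flow field. A minimal NumPy sketch, assuming the per-pixel flow vectors have already been estimated by some optical-flow method:

```python
import numpy as np

def region_speed(flow, region_mask):
    """Average flow magnitude over a predetermined area.

    flow: array of shape (H, W, 2) holding per-pixel (dx, dy) velocity
    vectors, e.g. from a dense optical-flow estimator.
    region_mask: boolean (H, W) mask selecting the road area.
    Returns the mean speed, used as the 'moving speed' of the area.
    """
    magnitudes = np.linalg.norm(flow, axis=2)  # per-pixel speed
    return float(magnitudes[region_mask].mean())
```

In practice the flow field would come from a dense method such as Farneback optical flow; only the averaging step is shown here.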
  • In step 102, when the moving speed of the predetermined area exceeds the first threshold, the background model is updated using the image pixels that are not moving in the predetermined area as background pixels. Thereby, when foreground extraction is performed again according to the updated background model, foreground locking is avoided; and since the moving speed exceeds the first threshold, there is no problem of the foreground being integrated into the background model.
  • The method may further include a step (not shown) of performing motion detection on the predetermined area to determine the moving image and the non-moving image, where moving and non-moving are defined with the predetermined area as the reference.
  • For example, when the predetermined area is a road area, the moving image may be a vehicle, and the non-moving image may be the road surface, a road sign, a signal light, or the like.
  • The motion detection may adopt an existing technique, for example frame differencing, the structure tensor, or background subtraction. The motion detection step identifies the images that are not moving, and the detected non-moving image pixels are updated into the background model as background pixels.
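A minimal sketch of one of the existing techniques named here, frame differencing; the difference threshold is an illustrative assumption:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, diff_threshold=15):
    """Frame-difference motion detection: True where the pixel moved.

    Casting to int avoids uint8 wraparound when subtracting images.
    """
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return diff > diff_threshold
```

The resulting boolean mask separates moving pixels from non-moving pixels; the non-moving ones are the candidates for the background update.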
  • The foregoing steps 101-102 may be performed for each frame in the video image sequence, or once every second predetermined number of frames; this embodiment is not limited thereto.
  • the first threshold and the second predetermined number may be determined according to requirements, and the embodiment is not limited thereto.
  • In order to avoid misdetection of the foreground, the method may further include:
  • Step 103 (optional): Perform edge detection on the predetermined area or on the foreground image in the predetermined area, and update the background model using the image pixels of areas without edges as background pixels.
  • the edge detection can be performed by using the prior art, for example, using a Sobel or Canny operator to detect the edge region, but the embodiment is not limited thereto.
  • The edge detection may be performed directly on the predetermined area, or foreground extraction may be performed on the predetermined area first and the edges detected on the extracted foreground image; this embodiment is not limited thereto. The foreground image may be extracted by comparing the predetermined area with the previously obtained background model; for the foreground extraction method, reference may be made to the prior art, and details are not described here again.
  • For example, the edges of a vehicle can be detected against the road (background). If an area composed of image pixels having no edges is detected, that area should be regarded as background rather than foreground. Therefore, in step 103, edge detection is performed on the foreground image, and when an area without edges exists in the foreground image, the image pixels of that area are added to the background model to update it; the foreground image of the next frame is then extracted according to the updated background model, preventing objects other than vehicles from being erroneously detected as foreground.
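A rough NumPy illustration of the edge test in step 103, using a simple gradient-magnitude check as a simplified stand-in for a Sobel or Canny detector; the gradient threshold is an assumption:

```python
import numpy as np

def edgeless_mask(image, grad_threshold=10.0):
    """Mark pixels whose local gradient magnitude is below a threshold.

    Regions flagged True contain no edges (e.g. bare road surface
    mis-detected as foreground) and can be pushed back into the
    background model, as described for step 103.
    """
    gy, gx = np.gradient(image.astype(float))  # row/column gradients
    grad = np.hypot(gx, gy)                    # gradient magnitude
    return grad < grad_threshold
```

A production implementation would use a proper edge operator (Sobel/Canny) as the text suggests; the masking logic is the same.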
  • step 103 may also be performed before step 101 or concurrently with step 101. This embodiment is not limited thereto.
  • The background model is updated in consideration of the moving speed of the predetermined area so that the foreground image is determined based on the updated background model, which is robust under different video conditions and improves the accuracy of foreground image detection.
  • The second embodiment provides a traffic congestion condition detecting method.
  • FIG. 2 is a flow chart of the traffic congestion condition detecting method of the second embodiment; as shown in FIG. 2, the method includes:
  • Step 201 Extract a foreground image from a predetermined area of the current image according to the background model
  • Step 202: Calculate a congestion index according to a first congestion index, which is negatively correlated with the moving speed of the predetermined region, and a second congestion index;
  • wherein the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined region to the number of pixels of the current image in the predetermined area;
  • Step 203 Determine a traffic congestion condition according to the congestion index.
  • The current image is a video image, which can be obtained by extracting a frame from the monitoring video; the monitoring video can be obtained by a camera mounted over a monitoring area (such as a road), and the predetermined area can be the road image within the video image.
  • The congestion index is used to detect whether traffic congestion has occurred, thereby improving the accuracy of traffic congestion detection, with good robustness and strong noise tolerance.
  • The foreground image may be determined according to the background model extracted by the method in Embodiment 1. For example, the current image may be binarized, with foreground pixels set to 1 and background pixels set to 0: the predetermined area of the current image is compared with the background model updated from the previous frame, and pixels that differ significantly are set to 1, i.e. marked as foreground pixels, to extract the foreground image. The foreground image may also be extracted based on other prior art, which is not described here.
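A minimal sketch of the binarized foreground extraction just described; the difference threshold is an illustrative assumption:

```python
import numpy as np

def extract_foreground(frame, background, diff_threshold=25):
    """Binarize the difference between the current frame and the
    background model: 1 = foreground pixel, 0 = background pixel.

    Casting to int avoids uint8 wraparound in the subtraction.
    """
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > diff_threshold).astype(np.uint8)
```

The count of 1-valued pixels in the predetermined area is exactly the `num` used later for the second congestion index.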
  • The first congestion index I_V and the second congestion index I_f may be processed to obtain a congestion index I, which is used to evaluate the influence of the vehicle moving speed and the vehicle density on the traffic congestion state.
  • step 203 when the congestion index is greater than or equal to the second threshold, it is determined that the traffic congestion condition is congestion.
  • Alternatively, when the congestion index is greater than or equal to the second threshold and the time for which it remains greater than or equal to the second threshold reaches a third threshold, the traffic congestion condition is determined to be congestion, so as to improve the accuracy of the detection result.
  • the second threshold and the third threshold may be determined according to actual conditions, and details are not described herein again.
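The persistence rule above (congestion only when the index stays at or above the second threshold for at least the third-threshold duration) can be sketched as:

```python
def congested(index_history, index_threshold, min_duration):
    """Return True when the congestion index has stayed at or above
    `index_threshold` for at least `min_duration` consecutive samples.

    index_history: iterable of congestion-index values over time.
    """
    run = 0  # length of the current run of above-threshold samples
    for idx in index_history:
        run = run + 1 if idx >= index_threshold else 0
        if run >= min_duration:
            return True
    return False
```

With `min_duration=1` this reduces to the simpler rule of step 203 (a single sample at or above the second threshold suffices).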
  • the method may further include the step of detecting a moving speed of the predetermined area of the current image.
  • the specific detection method is as described in Embodiment 1, and details are not described herein again.
  • the method may further include the steps of: calculating the first congestion index according to the moving speed; and/or calculating the second congestion index.
  • The first congestion index is negatively correlated with the moving speed: the faster the moving speed, the smaller the first congestion index, indicating a lower degree of congestion; the slower the moving speed, the larger the first congestion index, indicating a higher degree of congestion.
  • The first congestion index may be a negative exponential function of the moving speed: I_V = exp(-cV), where c is a constant that can be determined according to the road conditions and the monitoring video conditions; this embodiment is not limited thereto.
  • The first congestion index may also be the reciprocal of the moving speed, or another function negatively correlated with the moving speed, which is not limited by this embodiment.
  • The second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined area to the number of pixels of the current image in the predetermined area, expressed by the following formula:
  • I_f = num / (height × width)
  • where I_f represents the second congestion index, num represents the number of foreground pixels of the foreground image in the predetermined area (e.g., the detection area), height and width represent the height and width of the current image in the predetermined area, and height × width represents the number of pixels of the current image in the predetermined area.
  • The method may further include the step of determining the weighting coefficients w_V and w_f of the first congestion index and the second congestion index, the congestion index being calculated based on w_V and w_f.
  • When the second congestion index is low, the congestion degree of the road mainly depends on the second congestion index and is less affected by the first congestion index; when the second congestion index is high, the congestion degree mainly depends on the first congestion index and is less affected by the second congestion index. The weighting coefficients can accordingly be determined as follows:
  • When the second congestion index I_f is greater than or equal to a fourth threshold, the weighting coefficient w_f of the second congestion index is determined to be less than or equal to the weighting coefficient w_V of the first congestion index; when the second congestion index I_f is less than the fourth threshold, the weighting coefficient w_f is determined to be greater than the weighting coefficient w_V.
  • The weighting coefficient w_f of the second congestion index is positively correlated with the second congestion index I_f, and the weighting coefficient w_V of the first congestion index is negatively correlated with the second congestion index I_f.
  • the weighting coefficient w f of the second congestion index may be a sigmoid function of the second congestion index.
  • FIG. 4 is a schematic diagram showing the relationship between the weighting coefficient w_V of the first congestion index, the weighting coefficient w_f of the second congestion index, and the second congestion index I_f in the present embodiment. As shown in FIG. 4, the weighting coefficient w_f of the second congestion index is expressed as a sigmoid function of I_f, where c_1 and c_2 are constants of the sigmoid function that can be determined according to actual needs; this embodiment is not limited thereto.
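A hedged sketch of the weighted combination: w_f is modeled as a sigmoid of I_f, as stated above, but the specific sigmoid constants and the complementary relation w_V = 1 - w_f are illustrative assumptions, since the exact formula is left to FIG. 4:

```python
import math

def congestion_index(i_v, i_f, c1=10.0, c2=0.5):
    """Weighted congestion index I = w_V * I_V + w_f * I_f.

    w_f is a sigmoid of I_f (rises with vehicle density), and
    w_V = 1 - w_f; both choices are assumptions for illustration.
    c1, c2 play the role of the sigmoid constants c_1, c_2.
    """
    w_f = 1.0 / (1.0 + math.exp(-c1 * (i_f - c2)))
    w_v = 1.0 - w_f
    return w_v * i_v + w_f * i_f
```

With this form, a dense scene (large I_f) weights the density term heavily, while a sparse scene lets the speed-based term I_V dominate.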
  • The method may further include a step (not shown) of updating the background model used in step 201: when the moving speed exceeds the first threshold, the background model is updated using the image pixels that are not moving in the predetermined area as background pixels; and/or edge detection is performed on the predetermined area or on the foreground image in the predetermined area, and the background model is updated using the image pixels of edge-free areas as background pixels. When the next frame of the video image is processed, the foreground image is extracted using the updated background model.
  • For example, congestion detection is performed every predetermined number p of frames. Let the current frame be the i-th frame, used as the current image: the foreground image of the predetermined area of the i-th frame is extracted according to the background model Mi, congestion detection is performed according to the methods in steps 202 and 203, and the background model Mi is updated according to the method in steps 101-102 to obtain the updated background model Mi+1. The foreground image of the predetermined area of the (i+p)-th frame image is then extracted according to the updated background model Mi+1, congestion detection is performed again according to steps 202 and 203, and Mi+1 is updated again according to steps 101-102 to obtain Mi+2, and so on. The accuracy of foreground extraction is thereby improved, which further improves the accuracy of congestion detection.
  • For details, refer to Embodiment 1; they are not repeated here.
  • Whether traffic congestion occurs is detected according to a congestion index related to the vehicle density and the vehicle moving speed, thereby improving the accuracy of traffic congestion detection, with good robustness and strong noise tolerance.
  • Embodiment 3 further provides a background model extraction device. Since the principle by which the device solves the problem is similar to the method of Embodiment 1, its specific implementation may refer to the implementation of the method in Embodiment 1, and identical content is not described repeatedly.
  • FIG. 5 is a schematic diagram showing the structure of the extraction device of the background model in the third embodiment. As shown in FIG. 5, the extraction device 500 of the background model includes:
  • a first detecting unit 501 configured to detect a moving speed of a predetermined area of the current image
  • the first updating unit 502 is configured to update the background model by using the image pixels that are not moving in the predetermined area as background pixels when the moving speed exceeds the first threshold.
  • The background model is updated in consideration of the moving speed of the predetermined area so that the foreground image is determined based on the updated background model, which is robust under different video conditions and improves the accuracy of foreground image detection.
  • the apparatus 500 may further include:
  • the second detecting unit 503 is configured to perform motion detection on the predetermined area to determine the moving image and the non-moving image.
  • the apparatus 500 may further include:
  • a second updating unit 504, configured to perform edge detection on the predetermined area or on the foreground image in the predetermined area, and update the background model using the image pixels of areas without edges as background pixels.
  • The specific implementations of the first detecting unit 501, the first updating unit 502, the second detecting unit 503, and the second updating unit 504 may refer to steps 101-103 in Embodiment 1; repeated description is omitted.
  • FIG. 6 is a schematic diagram showing the hardware configuration of the extraction device of the background model according to the embodiment of the present invention.
  • The background model extraction device 600 may include an interface (not shown), a central processing unit (CPU) 620, a memory 610, and a transceiver 640; the memory 610 is coupled to the central processor 620.
  • The memory 610 can store various data; in addition, a program for extracting the background model is stored and executed under the control of the central processing unit 620, and various preset values, predetermined conditions, and the like are also stored.
  • the functionality of the background model's extraction device 600 can be integrated into the central processor 620.
  • The central processing unit 620 can be configured to: detect the moving speed of a predetermined area of the current image; and, when the moving speed exceeds the first threshold, update the background model using the image pixels that are not moving in the predetermined area as background pixels.
  • the central processing unit 620 can also be configured to perform motion detection on the predetermined area to determine a moving image and the non-moving image.
  • the central processing unit 620 may be further configured to: perform edge detection on the predetermined area or perform edge detection on the foreground image in the predetermined area, and update the background model by using image pixels of the area without the edge as background pixels.
  • For a specific implementation of the central processing unit 620, reference may be made to Embodiment 1; it is not repeated here.
  • Alternatively, the background model extraction device 600 may be disposed on a chip (not shown) connected to the central processing unit 620, and the functions of the device 600 are implemented under the control of the central processing unit 620.
  • the device 600 does not necessarily include all of the components shown in FIG. 6; in addition, the device 600 may also include components not shown in FIG. 6, and reference may be made to the prior art.
  • The background model is updated in consideration of the moving speed of the predetermined area so that the foreground image is determined based on the updated background model, which is robust under different video conditions and improves the accuracy of foreground image detection.
  • Embodiment 4 of the present invention provides a traffic congestion condition detecting device. Since the principle by which the device solves the problem is similar to the method of Embodiment 2, its specific implementation may refer to the implementation of the method in Embodiment 2, and identical content is not described repeatedly.
  • FIG. 7 is a schematic diagram showing the structure of the traffic congestion condition detecting apparatus in this embodiment. As shown in FIG. 7, the traffic congestion condition detecting apparatus 700 includes:
  • a first extracting unit 701 configured to extract a foreground image from a predetermined area of the current image according to the background model
  • a calculating unit 702 configured to calculate a congestion index according to a first congestion index, which is negatively correlated with the moving speed of the predetermined region, and a second congestion index;
  • wherein the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined region to the number of pixels of the current image in the predetermined area;
  • the third detecting unit 703 is configured to determine a traffic congestion condition according to the congestion index.
  • Whether traffic congestion occurs is detected according to a congestion index related to the vehicle density and the vehicle moving speed, thereby improving the accuracy of traffic congestion detection, with good robustness and strong noise tolerance.
  • the calculating unit 702 is configured to perform weighting processing on the first congestion index and the second congestion index to obtain the congestion index
  • the third detecting unit 703 is configured to determine that the traffic congestion condition is congestion when the congestion index is greater than or equal to the second threshold; or to determine that the traffic congestion condition is congestion when the congestion index is greater than or equal to the second threshold and the time for which it remains greater than or equal to the second threshold reaches the third threshold.
  • the first congestion index is a negative exponential function of the moving speed.
  • the apparatus 700 may further include:
  • the fourth detecting unit 704 is configured to detect a moving speed of the predetermined area of the current image.
  • a first index calculation unit (not shown) for calculating the first congestion index according to the moving speed
  • a second index calculation unit (not shown) for calculating the second congestion index.
  • a coefficient determining unit (not shown), configured to determine, when the second congestion index is greater than or equal to a fourth threshold, a weighting coefficient of the second congestion index is less than or equal to a weighting coefficient of the first congestion index; and the second congestion When the index is less than the fourth threshold, determining that the weighting coefficient of the second congestion index is greater than the weighting coefficient of the first congestion index;
  • the weighting coefficient of the second congestion index is positively correlated with the second congestion index, and the weighting coefficient of the first congestion index is negatively correlated with the second congestion index.
  • the weighting coefficient of the second congestion index may be a sigmoid function of the second congestion index.
  • the apparatus 700 may further include:
  • the first background model extraction unit 705 is configured to update the background model by using the image pixels that are not moving in the predetermined area as background pixels when the moving speed exceeds the first threshold.
  • the second background model extracting unit 706 is configured to perform edge detection on the predetermined area or perform edge detection on the foreground image in the predetermined area, and update the background model by using image pixels of the area having no edge as the background pixels.
  • The specific implementations of the first extraction unit 701, the calculating unit 702, and the third detecting unit 703 may refer to steps 201-203 in Embodiment 2; for the first background model extraction unit 705 and the second background model extraction unit 706, reference may be made to steps 102-103 in Embodiment 1. Repeated description is omitted.
  • FIG. 8 is a schematic diagram showing the hardware configuration of the traffic congestion state detecting apparatus according to the embodiment of the present invention.
  • the traffic congestion state detecting apparatus 800 may include: an interface (not shown), a central processing unit (CPU) 820, a memory 810, and a transceiver 840; the memory 810 is coupled to the central processing unit 820.
  • the memory 810 can store various data; in addition, it stores a program for detecting traffic congestion conditions, which is executed under the control of the central processing unit 820, as well as various preset values, predetermined conditions, and the like.
  • the functionality of the traffic congestion condition detection device 800 can be integrated into the central processor 820.
  • the central processing unit 820 may be configured to: extract a foreground image from a predetermined area of the current image according to the background model; calculate a congestion index from a first congestion index, which is negatively correlated with the moving speed of the predetermined area, and a second congestion index, the second congestion index being the ratio of the number of foreground pixels of the foreground image in the predetermined area to the number of pixels of the current image in the predetermined area; and determine the traffic congestion condition according to the congestion index.
  • the central processing unit 820 can be configured to perform weighting processing on the first congestion index and the second congestion index to obtain the congestion index.
  • the first congestion index is a negative exponential function of the moving speed.
  • the weighting coefficient of the second congestion index is positively correlated with the second congestion index, and the weighting coefficient of the first congestion index is negatively correlated with the second congestion index.
  • the central processing unit 820 is further configured to: determine that the traffic congestion condition is congestion when the congestion index is greater than or equal to a second threshold; or determine that the traffic congestion condition is congestion when the congestion index is greater than or equal to the second threshold and remains at or above the second threshold for a time greater than or equal to a third threshold.
  • the central processing unit 820 is further configured to detect a moving speed of the predetermined area of the current image.
  • the central processing unit 820 may be further configured to: when the moving speed exceeds the first threshold, update the background model by using the image pixels that are not moving in the predetermined area as background pixels; and/or perform edge detection on the predetermined area of the current image or on the foreground image in the predetermined area, and update the background model by using the image pixels of areas having no edges as background pixels.
  • for a specific implementation of the central processing unit 820, reference may be made to Embodiment 2, which is not repeated here.
  • the traffic congestion condition detecting device 800 may alternatively be disposed on a chip (not shown) connected to the central processing unit 820, and the functions of the device 800 may be implemented under the control of the central processing unit 820.
  • the device 800 does not necessarily include all of the components shown in FIG. 8; in addition, the device 800 may also include components not shown in FIG. 8, and reference may be made to the prior art.
  • whether or not traffic congestion occurs is detected according to the density of the vehicles and a congestion index related to the moving speed of the vehicles, thereby improving the accuracy of traffic congestion detection, with better robustness and stronger noise tolerance.
  • an embodiment of the present invention further provides a computer readable program, wherein when the program is executed in an extraction device of a background model, the program causes a computer to execute, in the extraction device, the background model extraction method of Embodiment 1 above.
  • an embodiment of the present invention further provides a storage medium storing a computer readable program, wherein the computer readable program causes a computer to execute, in an extraction device of a background model, the background model extraction method of Embodiment 1 above.
  • an embodiment of the present invention further provides a computer readable program, wherein when the program is executed in a traffic congestion condition detecting device, the program causes a computer to execute, in the detecting device, the traffic congestion condition detection method of Embodiment 2 above.
  • an embodiment of the present invention further provides a storage medium storing a computer readable program, wherein the computer readable program causes a computer to execute, in a traffic congestion condition detecting device, the traffic congestion condition detection method of Embodiment 2 above.
  • the above apparatus and method of the present invention may be implemented by hardware or by hardware in combination with software.
  • the present invention relates to a computer readable program that, when executed by a logic component, enables the logic component to implement the apparatus or constituent components described above, or to implement the various methods or steps described above.
  • the present invention also relates to a storage medium for storing the above program, such as a hard disk, a magnetic disk, an optical disk, a DVD, a flash memory, or the like.
  • the background model extraction method executed in the extraction device of the background model, or the traffic congestion condition detection method executed in the traffic congestion condition detecting device, described in connection with the embodiments of the present invention, may be directly embodied as hardware, as a software module executed by a processor, or as a combination of the two.
  • one or more of the functional blocks shown in Figures 5-8 and/or one or more combinations of functional blocks may correspond to various software modules of a computer program flow, or to individual hardware modules.
  • These software modules can correspond to the various steps shown in Figures 1-2, respectively.
  • These hardware modules can be implemented, for example, by solidifying these software modules using a field programmable gate array (FPGA).
  • the software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art.
  • a storage medium can be coupled to the processor to enable the processor to read information from, and write information to, the storage medium; or the storage medium can be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC.
  • the software module may be stored in a memory of the image forming apparatus or in a memory card insertable to the image forming apparatus.
  • One or more of the functional blocks described with respect to Figures 5-8 and/or one or more combinations of the functional blocks may be implemented as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof, for performing the functions described herein.
  • One or more of the functional blocks described with respect to Figures 5-8 and/or one or more combinations of the functional blocks may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, or any other such configuration.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

An extraction device for a background model, and a traffic congestion condition detection method and device. The extraction device for a background model includes: a first detection unit for detecting the moving speed of a predetermined area of a current image; and a first updating unit for, when the moving speed exceeds a first threshold, updating the background model by taking the image pixels that are not moving in the predetermined area as background pixels. The device is thereby robust under different video conditions, can improve the accuracy of foreground image detection, and avoids the problem of low foreground detection accuracy caused by foreground merging into the background and foreground lock-up.

Description

Extraction Device for a Background Model, and Traffic Congestion Condition Detection Method and Device — Technical Field
The present invention relates to the field of image processing technologies, and in particular to an extraction device for a background model, and a traffic congestion condition detection method and device.
Background
With urban development and rising living standards, the number of vehicles increases year by year, and the problem of traffic congestion has become increasingly serious. Traffic congestion causes a great waste of various resources and brings serious pollution problems. Detecting traffic congestion conditions, so that traffic management departments can learn of congestion in time and take effective control measures, is one of the important means of addressing traffic congestion.
The extraction of background images is widely used in fields such as image monitoring. For example, in the field of traffic congestion detection, when detecting the foreground image in a video, the difference between the current frame and a reference frame may be compared to extract the foreground image, where the reference frame may be called a "background image" or represented by a "background model".
Some methods already exist for extracting a background model, such as frame differencing, mean filtering, and the background mixture model.
It should be noted that the above introduction to the technical background is set forth merely for the convenience of a clear and complete description of the technical solutions of the present invention and to facilitate the understanding of those skilled in the art. These solutions cannot be considered well known to those skilled in the art merely because they are set forth in the background section of the present invention.
Summary of the Invention
The accuracy of conventional background model extraction is affected by illumination changes and camera shake. In addition, due to unsuitable background model extraction, conventional methods often suffer from the problem that an object merging into the background after remaining stationary for a long time cannot be extracted as foreground, or the problem that a stationary object mistakenly extracted as foreground remains foreground permanently (foreground lock-up), thereby reducing foreground detection accuracy.
In addition, when existing methods are used to detect traffic congestion conditions, possibly only the number of detected vehicles is considered, leading to inaccurate detection results.
Embodiments of the present invention provide a background model extraction method and device that update the background model in consideration of the moving speed of the image detection area, so that the foreground image is determined according to the updated background model. The method is thereby robust under different video conditions, improves the accuracy of foreground image detection, and avoids the problem of low foreground detection accuracy caused by foreground merging and foreground lock-up.
Embodiments of the present invention provide a traffic congestion condition detection method and device that detect whether traffic congestion has occurred according to the density of vehicles and a congestion index related to the moving speed of the vehicles, thereby improving the accuracy of traffic congestion detection, with good robustness and strong noise tolerance.
The above objects of the embodiments of the present invention are achieved by the following technical solutions:
According to a first aspect of the embodiments of the present invention, an extraction device for a background model is provided, the device including:
a first detection unit for detecting the moving speed of a predetermined area of a current image;
a first updating unit for, when the moving speed exceeds a first threshold, updating the background model by taking the image pixels that are not moving in the predetermined area as background pixels.
According to a second aspect of the embodiments of the present invention, a traffic congestion condition detection device is provided, the device including:
a first extraction unit for extracting a foreground image from a predetermined area of a current image according to a background model;
a calculation unit for calculating a congestion index according to a first congestion index, which is negatively correlated with the moving speed of the predetermined area, and a second congestion index; the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined area to the number of pixels of the current image in the predetermined area;
a first detection unit for determining a traffic congestion condition according to the congestion index.
According to a third aspect of the embodiments of the present invention, a traffic congestion condition detection method is provided, the method including:
extracting a foreground image from a predetermined area of a current image according to a background model;
calculating a congestion index according to a first congestion index, which is negatively correlated with the moving speed of the predetermined area, and a second congestion index; the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined area to the number of pixels of the current image in the predetermined area;
determining a traffic congestion condition according to the congestion index.
A beneficial effect of the embodiments of the present invention is that the background model extraction method and device of this embodiment are robust under different video conditions, improve the accuracy of foreground image detection, and avoid the problem of low foreground detection accuracy caused by foreground merging and foreground lock-up.
A further beneficial effect of the embodiments of the present invention is that the traffic congestion condition detection method and device of this embodiment can improve the accuracy of traffic congestion detection, with good robustness and strong noise tolerance.
With reference to the following description and drawings, specific embodiments of the present invention are disclosed in detail, indicating the manner in which the principles of the invention may be employed. It should be understood that the embodiments of the present invention are not thereby limited in scope. Within the scope of the terms of the appended claims, the embodiments of the present invention include many changes, modifications, and equivalents.
Features described and/or illustrated for one embodiment may be used in the same or a similar manner in one or more other embodiments, combined with features in other embodiments, or substituted for features in other embodiments.
It should be emphasized that the term "comprise/include", when used herein, refers to the presence of features, integers, steps, or components, but does not exclude the presence or addition of one or more other features, integers, steps, or components.
Brief Description of the Drawings
Elements and features described in one drawing or embodiment of the present invention may be combined with elements and features shown in one or more other drawings or embodiments. In addition, in the drawings, like reference numerals denote corresponding components in several drawings and may be used to indicate corresponding components used in more than one embodiment.
The included drawings are used to provide a further understanding of the embodiments of the present invention, constitute a part of the specification, illustrate embodiments of the invention, and together with the description explain the principles of the invention. Obviously, the drawings described below are merely some embodiments of the present invention; for those of ordinary skill in the art, other drawings may be obtained from these drawings without creative effort. In the drawings:
FIG. 1 is a flowchart of the background model extraction method of Embodiment 1;
FIG. 2 is a flowchart of the traffic congestion condition detection method of Embodiment 2;
FIG. 3 is a schematic diagram of the curve relationship between the first congestion index and the moving speed in Embodiment 2;
FIG. 4 is a schematic diagram of the curve relationship of the weighting coefficients of the first and second congestion indices versus the second congestion index in Embodiment 2;
FIG. 5 is a schematic diagram of the configuration of the background model extraction device of Embodiment 3;
FIG. 6 is a schematic diagram of the hardware configuration of the background model extraction device of Embodiment 3;
FIG. 7 is a schematic diagram of the configuration of the traffic congestion condition detection device of Embodiment 4;
FIG. 8 is a schematic diagram of the hardware configuration of the traffic congestion condition detection device of Embodiment 4.
Detailed Description
The foregoing and other features of the present invention will become apparent from the following description with reference to the drawings. In the description and drawings, specific embodiments of the invention are disclosed that indicate some of the embodiments in which the principles of the invention may be employed. It should be understood that the invention is not limited to the described embodiments; rather, the invention includes all modifications, variations, and equivalents falling within the scope of the appended claims. Various embodiments of the invention are described below with reference to the drawings. These embodiments are merely exemplary and do not limit the present invention.
In this embodiment, an image monitoring scenario in the traffic field is taken as an example for description, but the present invention is not limited thereto and may also be applied to other scenarios.
Embodiment 1
Embodiment 1 provides a background model extraction method.
FIG. 1 is a flowchart of the background model extraction method of Embodiment 1. As shown in FIG. 1, the method includes:
Step 101: detecting the moving speed of a predetermined area of a current image;
Step 102: when the moving speed exceeds a first threshold, updating the background model by taking the image pixels that are not moving in the predetermined area as background pixels.
It can be seen from the above embodiment that the background model is updated in consideration of the moving speed of the predetermined area, so that the foreground image is determined according to the updated background model. The method is thereby robust under different video conditions, improves the accuracy of foreground image detection, and avoids the problem of low foreground detection accuracy caused by foreground merging and foreground lock-up.
In this embodiment, the current image is a video image and may be obtained by extracting a frame from a surveillance video, which may be obtained by a camera installed over a monitored area (such as a road).
In this embodiment, a background model may be generated in advance using the prior art, for example, using a Gaussian mixture model, a Gaussian model, a Kalman filter or another statistical model, or the ViBe algorithm; the background model may then be continuously updated by the method of steps 101-102. Alternatively, a background model may be generated by the method of steps 101-102 and continuously updated by that method for successive current images. For example, starting from an initial background model M0, the background model is continuously updated to obtain background models M1, M2, ..., Mi, where i is a positive integer.
Steps 101-102 describe one update of the background model: the updated background model obtained for the current image in step 102 is Mj, and the background model obtained by steps 101-102 for the next image to be processed is Mj+1, where j is 0 or any positive integer. FIG. 1 takes a single update as an example; in actual implementation, the background model may be updated multiple times.
In this embodiment, after the background model is updated in step 102, the foreground image may be extracted from the next input frame according to the updated background model, which can improve the accuracy of foreground image detection and avoid the problem of low foreground detection accuracy caused by foreground merging and foreground lock-up.
In this embodiment, in step 101, the moving speed of the predetermined area of the current image may be detected using the prior art, for example, using an optical flow method.
When the optical flow method is used, each pixel in the predetermined area of the current image is assigned a velocity vector, forming an image motion field; according to the velocity vector characteristics of each pixel, the predetermined area of the current image can be analyzed dynamically. When there is no moving object in the predetermined area, the optical flow vectors are static over the entire image area. When there is a moving object in the predetermined area, there is relative motion between the foreground and the background, and the velocity vectors formed by the moving object differ from those of the neighboring background; the moving speed of the predetermined area can thereby be obtained, for example by taking the average moving speed over the predetermined area as the moving speed of the predetermined area.
The predetermined area may be a road area. When the predetermined area is a road area, vehicles on the road may also be tracked and detected, so that the average moving speed of the vehicles is taken as the moving speed of the predetermined area. The moving speed of the predetermined area does not mean that the predetermined area itself is moving; rather, the average moving speed of the vehicles (objects) in the predetermined area is defined as the moving speed of the predetermined area.
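As a rough illustration of the speed measurement described above — deliberately simpler than the optical-flow method, which the text leaves to the prior art — the displacement of the foreground centroid between two frames can serve as a stand-in for the region's average moving speed. The function name and the centroid-displacement simplification are assumptions for illustration only:

```python
import numpy as np

def region_speed(mask_prev, mask_curr, fps=25.0):
    """Toy estimate of the predetermined area's moving speed: the
    displacement of the foreground centroid between two binary masks,
    scaled by the frame rate (pixels per second). A real implementation
    would use dense optical flow or per-vehicle tracking instead."""
    ys_p, xs_p = np.nonzero(mask_prev)
    ys_c, xs_c = np.nonzero(mask_curr)
    if len(xs_p) == 0 or len(xs_c) == 0:
        return 0.0  # no foreground detected -> treat the region as static
    dx = xs_c.mean() - xs_p.mean()
    dy = ys_c.mean() - ys_p.mean()
    return float(np.hypot(dx, dy) * fps)

# two frames in which a 2x2 "vehicle" moves 3 pixels to the right
prev = np.zeros((10, 10), dtype=np.uint8)
curr = np.zeros((10, 10), dtype=np.uint8)
prev[4:6, 1:3] = 1
curr[4:6, 4:6] = 1
speed = region_speed(prev, curr, fps=1.0)  # 3.0 pixels per frame
```

With several vehicles in the mask the centroid shift averages their motion, which loosely matches the text's use of the average moving speed of the area.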
Through research, the inventors found that when the vehicles move fast, they travel quickly along the road, no vehicle stops on the road, and the congestion scenario in which a large number of vehicles linger on the road does not occur, so there is no need to consider vehicles merging into the background model when updating it. Therefore, in step 102, when the moving speed of the predetermined area exceeds the first threshold, the image pixels that are not moving in the predetermined area are taken as background pixels to update the background model. Thus, when foreground extraction is performed again according to the updated background model, the problem of foreground lock-up can be avoided; and since the moving speed exceeds the first threshold, the problem of foreground merging into the background model does not arise.
In this embodiment, the method may further include (not shown): performing motion detection on the predetermined area to determine the moving images and the non-moving images, where "moving" and "non-moving" are with reference to the predetermined area. For example, when the predetermined area is a road area, the moving images may be vehicles, and the non-moving images may be the road surface, road signs, signal lights, and the like. The motion detection may be performed using the prior art, for example, frame differencing, structure tensors, or background subtraction. When the moving speed exceeds the first threshold in step 102, the non-moving image pixels detected by this motion detection step are taken as background pixels and updated into the background model.
In this embodiment, the above steps 101-102 may be performed for every frame in the video image sequence, or for every second predetermined number of frames in the sequence; this embodiment is not limited in this respect.
The first threshold and the second predetermined number may be determined as needed; this embodiment is not limited in this respect. In this embodiment, to avoid false foreground detection, the method may further include:
Step 103 (optional): performing edge detection on the predetermined area or on the foreground image in the predetermined area, and updating the background model by taking the image pixels of areas having no edges as background pixels. In this embodiment, edge detection may be performed using the prior art, for example, using the Sobel or Canny operator to detect edge areas, but this embodiment is not limited thereto.
Edge detection may be performed on the predetermined area directly, or foreground extraction may be performed on the predetermined area in advance and edge detection performed on the extracted foreground image; this embodiment is not limited in this respect. The predetermined area may be compared with a previously obtained background model to extract the foreground image; for the foreground extraction method, reference may be made to the prior art, and details are not repeated here.
In this embodiment, a vehicle on the road (foreground) generally has detectable edges compared with the road (background), so when an area composed of image pixels without edges is detected, that area should not be regarded as foreground but as background. Therefore, in step 103, edge detection is performed on the foreground image, and when there is an area without edges in the foreground image, the image pixels of that area are added to the background model to update it; the foreground image of the next frame is then extracted according to the updated background model, thereby preventing objects other than vehicles from being falsely detected as foreground.
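A minimal sketch of the edge-based update in step 103 might look as follows. It uses plain finite-difference gradients as a stand-in for the Sobel/Canny operators mentioned in the text, and the function name and `edge_thresh` constant are assumptions for illustration:

```python
import numpy as np

def update_background_by_edges(background, frame, edge_thresh=30.0):
    """Sketch of step 103: pixels lying in areas where no edge is
    detected are treated as background and absorbed into the model."""
    f = frame.astype(np.float64)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:-1] = f[:, 2:] - f[:, :-2]   # horizontal intensity gradient
    gy[1:-1, :] = f[2:, :] - f[:-2, :]   # vertical intensity gradient
    magnitude = np.hypot(gx, gy)
    no_edge = magnitude < edge_thresh    # True where no edge is present
    updated = background.copy()
    updated[no_edge] = frame[no_edge]    # edge-free pixels become background
    return updated

# a perfectly flat (edge-free) frame: every pixel is absorbed as background
bg = np.zeros((5, 5), dtype=np.uint8)
flat = np.full((5, 5), 100, dtype=np.uint8)
new_bg = update_background_by_edges(bg, flat)
```

Pixels on or near a vehicle's outline keep a large gradient magnitude and are left out of the update, which is the behaviour the paragraph above describes.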
In this embodiment, step 103 may also be performed before step 101 or simultaneously with step 101; this embodiment is not limited in this respect.
With the above method of this embodiment, the background model is updated in consideration of the moving speed of the predetermined area, so that the foreground image is determined according to the updated background model. The method is thereby robust under different video conditions, improves the accuracy of foreground image detection, and avoids the problem of low foreground detection accuracy caused by foreground merging and foreground lock-up.
Embodiment 2
Embodiment 2 provides a traffic congestion condition detection method. FIG. 2 is a flowchart of the traffic congestion condition detection method of Embodiment 2. As shown in FIG. 2, the method includes:
Step 201: extracting a foreground image from a predetermined area of a current image according to a background model;
Step 202: calculating a congestion index according to a first congestion index, which is negatively correlated with the moving speed of the predetermined area, and a second congestion index; the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined area to the number of pixels of the current image in the predetermined area;
Step 203: determining a traffic congestion condition according to the congestion index.
In this embodiment, the current image is a video image and may be obtained by extracting a frame from a surveillance video, which may be obtained by a camera installed over a monitored area (such as a road); the predetermined area may be a road (road surface) image in the video image.
The inventors found in their research that the moving speed of vehicles on the road and the density of the vehicles affect the congestion condition of the road. Therefore, in this embodiment, to detect road congestion, whether traffic congestion has occurred may be detected according to the density of the vehicles and a congestion index related to the moving speed of the vehicles, thereby improving the accuracy of traffic congestion detection, with good robustness and strong noise tolerance.
In this embodiment, the foreground image may be determined from the background model extracted by the method of Embodiment 1. For example, the current image may be a binarized image, with foreground pixels having the value 1 and background pixels the value 0; the predetermined area of the current image is compared with the background model updated according to previous frames, and pixels that differ significantly are set to 1, i.e., marked as foreground pixels, to extract the foreground image. The foreground image may also be extracted according to other prior art; details are not repeated here.
In this embodiment, the faster the vehicles move, the smoother the road and the lower the degree of congestion; otherwise, the higher the degree of congestion. In addition, the smaller the vehicle density (few foreground pixels in the predetermined area), the smoother the road and the lower the degree of congestion; otherwise, the higher the degree of congestion. Therefore, a first congestion index negatively correlated with the moving speed of the predetermined area, together with the density of the foreground image, may be used to evaluate the traffic congestion condition.
Therefore, in step 202, the first congestion index I_V and the second congestion index I_f may be processed to obtain a congestion index I, which is used to evaluate the influence of the moving speed and the density of the vehicles on the traffic congestion condition. For example, the first and second congestion indices may be weighted to obtain the congestion index: I = w_V·I_V + w_f·I_f, where w_V and w_f are the weighting coefficients of the first and second congestion indices, respectively.
In step 203, when the congestion index is greater than or equal to a second threshold, the traffic congestion condition is determined to be congestion. Alternatively, in step 203, when the congestion index is greater than or equal to the second threshold and remains at or above the second threshold for a time greater than or equal to a third threshold, the traffic congestion condition is determined to be congestion, to improve the accuracy of the detection result. The second and third thresholds may be determined according to actual conditions; details are not repeated here.
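A minimal sketch of the decision rule in step 203, with the second threshold applied to the index value and the third threshold applied to how long the index stays at or above it. The class name, the threshold values, and expressing the duration in frames rather than seconds are all illustrative assumptions, not values from the patent:

```python
class CongestionDetector:
    """Declares congestion only after the congestion index has stayed at
    or above `index_thresh` (the second threshold) for at least
    `duration_thresh` consecutive updates (the third threshold,
    expressed here in frames for simplicity)."""

    def __init__(self, index_thresh=0.6, duration_thresh=3):
        self.index_thresh = index_thresh
        self.duration_thresh = duration_thresh
        self.run_length = 0  # consecutive updates at or above the threshold

    def update(self, congestion_index):
        if congestion_index >= self.index_thresh:
            self.run_length += 1
        else:
            self.run_length = 0  # a dip below the threshold resets the timer
        return self.run_length >= self.duration_thresh

det = CongestionDetector(index_thresh=0.6, duration_thresh=3)
states = [det.update(i) for i in [0.7, 0.8, 0.5, 0.9, 0.9, 0.9]]
```

The duration requirement suppresses one-frame spikes (e.g. a large vehicle briefly filling the view), which is the stated reason for the third threshold.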
In this embodiment, before step 202, the method may optionally further include: detecting the moving speed of the predetermined area of the current image. The specific detection method is as described in Embodiment 1 and is not repeated here.
In this embodiment, before step 202, the method may optionally further include: calculating the first congestion index according to the moving speed; and/or calculating the second congestion index.
In this embodiment, the first congestion index is negatively correlated with the moving speed: the faster the moving speed, the smaller the first congestion index, indicating a lower degree of congestion; the slower the moving speed, the larger the first congestion index, indicating a higher degree of congestion.
In one implementation, the first congestion index may be a negative exponential function of the moving speed.
FIG. 3 is a diagram of the curve relationship between the first congestion index I_V and the moving speed V in Embodiment 2. As shown in FIG. 3, I_V = exp(-cV), where c is a constant that may be determined according to road conditions and surveillance video conditions; this embodiment is not limited in this respect.
In another implementation, the first congestion index may also be the reciprocal of the moving speed, or another negatively correlated function; this embodiment is not limited in this respect.
In this embodiment, the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined area to the number of pixels of the current image in the predetermined area, expressed by the following formula:

I_f = num / (height × width)

where I_f denotes the second congestion index, num denotes the number of foreground pixels of the foreground image in the predetermined area (e.g., the detection area), height and width denote the height and width of the current image in the predetermined area, and height × width denotes the number of pixels of the current image in the predetermined area.
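The two indices above can be computed directly from a binary foreground mask and the measured speed. The constant `c` is the one from FIG. 3, and the value used here is only illustrative:

```python
import math
import numpy as np

def first_congestion_index(speed, c=0.1):
    """I_V = exp(-c * V): close to 1 for stationary traffic and
    decaying toward 0 as traffic speeds up; c is a tuning constant."""
    return math.exp(-c * speed)

def second_congestion_index(foreground_mask):
    """I_f = num / (height * width): the fraction of the predetermined
    area covered by foreground pixels (the mask holds 1 for foreground)."""
    height, width = foreground_mask.shape
    return float(foreground_mask.sum()) / (height * width)

mask = np.zeros((10, 20), dtype=np.uint8)
mask[:, :5] = 1                        # a quarter of the area is foreground
i_f = second_congestion_index(mask)    # 50 / 200 = 0.25
i_v = first_congestion_index(0.0)      # stationary traffic -> 1.0
```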
In this embodiment, when the first and second congestion indices are weighted in step 202 to obtain the congestion index, the method may further include: determining the weighting coefficients w_V and w_f of the first and second congestion indices.
In this embodiment, when the second congestion index is below a certain threshold, the degree of road congestion mainly depends on the second congestion index and is less affected by the first congestion index; when the second congestion index is above a certain threshold, the degree of road congestion mainly depends on the first congestion index and is less affected by the second congestion index. The weighting coefficients may therefore be determined as follows:
when the second congestion index I_f is greater than or equal to a fourth threshold, the weighting coefficient w_f of the second congestion index is determined to be less than or equal to the weighting coefficient w_V of the first congestion index; when the second congestion index I_f is less than the fourth threshold, the weighting coefficient w_f of the second congestion index is determined to be greater than the weighting coefficient w_V of the first congestion index.
The weighting coefficient w_f of the second congestion index is positively correlated with the second congestion index I_f, and the weighting coefficient w_V of the first congestion index is negatively correlated with I_f. The sum of the two weighting coefficients is 1, i.e., w_V + w_f = 1. For example, the weighting coefficient w_f of the second congestion index may be a sigmoid function of the second congestion index.
FIG. 4 is a schematic diagram of the curve relationship of the weighting coefficient w_V of the first congestion index and the weighting coefficient w_f of the second congestion index versus the second congestion index I_f in this embodiment. As shown in FIG. 4, the weighting coefficient of the second congestion index is given as a sigmoid function of I_f with constants c1 and c2 (the exact formula appears only as a figure in the original), and the weighting coefficient of the first congestion index is given as w_V = 1 - w_f, where c1 and c2 are constants of the sigmoid function that may be determined according to actual needs; this embodiment is not limited in this respect.
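The weighted combination above can be sketched as follows. Since the original formula for w_f is given only as a figure, the common two-constant sigmoid form 1/(1 + exp(-c1·(I_f - c2))) is assumed here, and the values of c1 and c2 are illustrative rather than taken from the patent:

```python
import math

def congestion_index(i_v, i_f, c1=10.0, c2=0.5):
    """Weighted congestion index I = w_V*I_V + w_f*I_f, where w_f is an
    assumed sigmoid of the second congestion index and w_V = 1 - w_f,
    so the two weighting coefficients always sum to 1."""
    w_f = 1.0 / (1.0 + math.exp(-c1 * (i_f - c2)))
    w_v = 1.0 - w_f
    return w_v * i_v + w_f * i_f

# dense, slow traffic: both indices high -> congestion index close to 1
dense_slow = congestion_index(i_v=0.9, i_f=0.8)
# sparse, fast traffic: both indices low -> congestion index close to 0
sparse_fast = congestion_index(i_v=0.1, i_f=0.1)
```

Because both I_V and I_f lie in [0, 1] and the weights sum to 1, the combined index also stays in [0, 1], which makes the second threshold of step 203 straightforward to set.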
In this embodiment, the method may further include (not shown): updating the background model of step 201, where, when the moving speed exceeds the first threshold, the image pixels that are not moving in the predetermined area are taken as background pixels to update the background model; and/or edge detection is performed on the predetermined area or on the foreground image in the predetermined area, and the image pixels of areas having no edges are taken as background pixels to update the background model. When the next video frame is processed, the foreground image is extracted using the updated background model.
For example, for a video image (image sequence), congestion detection is performed every predetermined number (p) of frames. Let the current frame be frame i, taken as the current image: the foreground image of the predetermined area of frame i is extracted according to background model Mi, congestion detection is performed by the method of steps 202 and 203, and the background model Mi used in step 201 is updated by the method of steps 101-102 to obtain the updated background model Mi+1. After p frames, i.e., for frame i+p, the foreground image of the predetermined area of frame i+p is extracted according to the updated background model Mi+1, congestion detection is performed by the method of steps 202 and 203, and the background model Mi+1 is again updated by the method of steps 101-102 to obtain Mi+2, and so on. This improves the accuracy of foreground extraction and further improves the accuracy of congestion detection; for the specific implementation, reference may be made to Embodiment 1, and details are not repeated here.
It can be seen from the method of the above embodiment that whether traffic congestion has occurred is detected according to the density of the vehicles and a congestion index related to the moving speed of the vehicles, thereby improving the accuracy of traffic congestion detection, with good robustness and strong noise tolerance.
Embodiment 3
Embodiment 3 further provides an extraction device for a background model. Since the principle by which the device solves the problem is similar to the method of Embodiment 1, its specific implementation may refer to the implementation of the method of Embodiment 1, and identical content is not repeated.
FIG. 5 is a schematic diagram of the configuration of the background model extraction device of Embodiment 3. As shown in FIG. 5, the background model extraction device 500 includes:
a first detection unit 501 for detecting the moving speed of a predetermined area of a current image;
a first updating unit 502 for, when the moving speed exceeds a first threshold, updating the background model by taking the image pixels that are not moving in the predetermined area as background pixels.
With the above device of this embodiment, the background model is updated in consideration of the moving speed of the predetermined area, so that the foreground image is determined according to the updated background model; the device is thereby robust under different video conditions, improves the accuracy of foreground image detection, and avoids the problem of low foreground detection accuracy caused by foreground merging and foreground lock-up.
In this embodiment, the device 500 may further include:
a second detection unit 503 for performing motion detection on the predetermined area to determine the moving images and the non-moving images.
In this embodiment, to improve foreground detection accuracy, the device 500 may further include:
a second updating unit 504 (optional) for performing edge detection on the predetermined area or on the foreground image in the predetermined area, and updating the background model by taking the image pixels of areas having no edges as background pixels.
For the specific implementations of the first detection unit 501, the first updating unit 502, the second detection unit 503, and the second updating unit 504, reference may be made to steps 101-103 in Embodiment 1, and repeated details are omitted.
FIG. 6 is a schematic diagram of the hardware configuration of the background model extraction device according to an embodiment of the present invention. As shown in FIG. 6, the background model extraction device 600 may include: an interface (not shown), a central processing unit (CPU) 620, a memory 610, and a transceiver 640; the memory 610 is coupled to the central processing unit 620. The memory 610 may store various data; in addition, it stores a program for background model extraction, which is executed under the control of the central processing unit 620, as well as various preset values and predetermined conditions.
In one implementation, the functions of the background model extraction device 600 may be integrated into the central processing unit 620, which may be configured to: detect the moving speed of a predetermined area of a current image; and, when the moving speed exceeds a first threshold, update the background model by taking the image pixels that are not moving in the predetermined area as background pixels.
The central processing unit 620 may be further configured to perform motion detection on the predetermined area to determine the moving images and the non-moving images.
The central processing unit 620 may be further configured to perform edge detection on the predetermined area or on the foreground image in the predetermined area, and update the background model by taking the image pixels of areas having no edges as background pixels.
For the specific implementation of the central processing unit 620, reference may be made to Embodiment 1; details are not repeated here.
In another implementation, the background model extraction device 600 may also be disposed on a chip (not shown) connected to the central processing unit 620, and the functions of the device 600 may be implemented under the control of the central processing unit 620.
It should be noted that the device 600 does not necessarily include all of the components shown in FIG. 6; in addition, the device 600 may also include components not shown in FIG. 6, for which reference may be made to the prior art.
With the above device of this embodiment, the background model is updated in consideration of the moving speed of the predetermined area, so that the foreground image is determined according to the updated background model; the device is thereby robust under different video conditions, improves the accuracy of foreground image detection, and avoids the problem of low foreground detection accuracy caused by foreground merging and foreground lock-up.
Embodiment 4
Embodiment 4 of the present invention provides a traffic congestion condition detection device. Since the principle by which the device solves the problem is similar to the method of Embodiment 2, its specific implementation may refer to the implementation of the method of Embodiment 2, and identical content is not repeated.
FIG. 7 is a schematic diagram of the configuration of the traffic congestion condition detection device of this embodiment. As shown in FIG. 7, the traffic congestion condition detection device 700 includes:
a first extraction unit 701 for extracting a foreground image from a predetermined area of a current image according to a background model;
a calculation unit 702 for calculating a congestion index according to a first congestion index, which is negatively correlated with the moving speed of the predetermined area, and a second congestion index; the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined area to the number of pixels of the current image in the predetermined area;
a third detection unit 703 for determining a traffic congestion condition according to the congestion index.
With the above device of this embodiment, whether traffic congestion has occurred is detected according to the density of the vehicles and a congestion index related to the moving speed of the vehicles, thereby improving the accuracy of traffic congestion detection, with good robustness and strong noise tolerance.
In this embodiment, the calculation unit 702 is configured to weight the first and second congestion indices to obtain the congestion index; the third detection unit 703 is configured to determine that the traffic congestion condition is congestion when the congestion index is greater than or equal to a second threshold, or to determine that the traffic congestion condition is congestion when the congestion index is greater than or equal to the second threshold and remains at or above the second threshold for a time greater than or equal to a third threshold.
The first congestion index is a negative exponential function of the moving speed.
In this embodiment, the device 700 may further include:
a fourth detection unit 704 for detecting the moving speed of the predetermined area of the current image;
a first index calculation unit (not shown) for calculating the first congestion index according to the moving speed;
a second index calculation unit (not shown) for calculating the second congestion index;
a coefficient determining unit (not shown) for determining, when the second congestion index is greater than or equal to a fourth threshold, that the weighting coefficient of the second congestion index is less than or equal to the weighting coefficient of the first congestion index, and, when the second congestion index is less than the fourth threshold, that the weighting coefficient of the second congestion index is greater than the weighting coefficient of the first congestion index.
In one implementation, the weighting coefficient of the second congestion index is positively correlated with the second congestion index, and the weighting coefficient of the first congestion index is negatively correlated with the second congestion index. For example, the weighting coefficient of the second congestion index may be a sigmoid function of the second congestion index.
In this embodiment, to improve the accuracy of congestion detection, the device 700 may further include:
a first background model extraction unit 705 for, when the moving speed exceeds a first threshold, updating the background model by taking the image pixels that are not moving in the predetermined area as background pixels;
a second background model extraction unit 706 for performing edge detection on the predetermined area or on the foreground image in the predetermined area, and updating the background model by taking the image pixels of areas having no edges as background pixels.
For the specific implementations of the first extraction unit 701, the calculation unit 702, and the third detection unit 703, reference may be made to steps 201-203 in Embodiment 2; for the first background model extraction unit 705 and the second background model extraction unit 706, reference may be made to steps 102-103 in Embodiment 1, and repeated details are omitted.
FIG. 8 is a schematic diagram of the hardware configuration of the traffic congestion condition detection device according to an embodiment of the present invention. As shown in FIG. 8, the traffic congestion condition detection device 800 may include: an interface (not shown), a central processing unit (CPU) 820, a memory 810, and a transceiver 840; the memory 810 is coupled to the central processing unit 820. The memory 810 may store various data; in addition, it stores a program for traffic congestion condition detection, which is executed under the control of the central processing unit 820, as well as various preset values and predetermined conditions.
In one implementation, the functions of the traffic congestion condition detection device 800 may be integrated into the central processing unit 820, which may be configured to: extract a foreground image from a predetermined area of a current image according to a background model; calculate a congestion index according to a first congestion index, which is negatively correlated with the moving speed of the predetermined area, and a second congestion index, the second congestion index being the ratio of the number of foreground pixels of the foreground image in the predetermined area to the number of pixels of the current image in the predetermined area; and determine the traffic congestion condition according to the congestion index.
The central processing unit 820 may be configured to weight the first and second congestion indices to obtain the congestion index.
The first congestion index is a negative exponential function of the moving speed.
When the second congestion index is greater than or equal to a fourth threshold, the weighting coefficient of the second congestion index is determined to be less than or equal to the weighting coefficient of the first congestion index;
when the second congestion index is less than the fourth threshold, the weighting coefficient of the second congestion index is determined to be greater than the weighting coefficient of the first congestion index;
the weighting coefficient of the second congestion index is positively correlated with the second congestion index, and the weighting coefficient of the first congestion index is negatively correlated with the second congestion index.
The central processing unit 820 may be further configured to: determine that the traffic congestion condition is congestion when the congestion index is greater than or equal to a second threshold; or
determine that the traffic congestion condition is congestion when the congestion index is greater than or equal to the second threshold and remains at or above the second threshold for a time greater than or equal to a third threshold.
The central processing unit 820 may be further configured to detect the moving speed of the predetermined area of the current image.
The central processing unit 820 may be further configured to: when the moving speed exceeds a first threshold, update the background model by taking the image pixels that are not moving in the predetermined area as background pixels; and/or perform edge detection on the predetermined area of the current image or on the foreground image in the predetermined area, and update the background model by taking the image pixels of areas having no edges as background pixels.
For the specific implementation of the central processing unit 820, reference may be made to Embodiment 2; details are not repeated here.
In another implementation, the traffic congestion condition detection device 800 may also be disposed on a chip (not shown) connected to the central processing unit 820, and the functions of the device 800 may be implemented under the control of the central processing unit 820.
It should be noted that the device 800 does not necessarily include all of the components shown in FIG. 8; in addition, the device 800 may also include components not shown in FIG. 8, for which reference may be made to the prior art.
With the above device of this embodiment, whether traffic congestion has occurred is detected according to the density of the vehicles and a congestion index related to the moving speed of the vehicles, thereby improving the accuracy of traffic congestion detection, with good robustness and strong noise tolerance.
An embodiment of the present invention further provides a computer readable program, wherein when the program is executed in an extraction device of a background model, the program causes a computer to execute, in the extraction device, the background model extraction method of Embodiment 1 above.
An embodiment of the present invention further provides a storage medium storing a computer readable program, wherein the computer readable program causes a computer to execute, in an extraction device of a background model, the background model extraction method of Embodiment 1 above.
An embodiment of the present invention further provides a computer readable program, wherein when the program is executed in a traffic congestion condition detection device, the program causes a computer to execute, in the detection device, the traffic congestion condition detection method of Embodiment 2 above.
An embodiment of the present invention further provides a storage medium storing a computer readable program, wherein the computer readable program causes a computer to execute, in a traffic congestion condition detection device, the traffic congestion condition detection method of Embodiment 2 above.
The above devices and methods of the present invention may be implemented by hardware, or by hardware in combination with software. The present invention relates to a computer readable program that, when executed by a logic component, enables the logic component to implement the devices or constituent components described above, or to implement the various methods or steps described above. The present invention also relates to storage media for storing the above programs, such as hard disks, magnetic disks, optical disks, DVDs, and flash memory.
The background model extraction method executed in the background model extraction device, or the traffic congestion condition detection method executed in the traffic congestion condition detection device, described in connection with the embodiments of the present invention, may be directly embodied as hardware, as a software module executed by a processor, or as a combination of the two. For example, one or more of the functional blocks shown in FIGs. 5-8 and/or one or more combinations of the functional blocks may correspond to software modules of a computer program flow or to hardware modules. These software modules may correspond to the steps shown in FIGs. 1-2, respectively. These hardware modules may be implemented, for example, by solidifying the software modules using a field programmable gate array (FPGA).
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to a processor so that the processor can read information from, and write information to, the storage medium; or the storage medium may be an integral part of the processor. The processor and the storage medium may reside in an ASIC. A software module may be stored in the memory of an image forming apparatus or in a memory card insertable into the image forming apparatus.
One or more of the functional blocks described with respect to FIGs. 5-8 and/or one or more combinations of the functional blocks may be implemented as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof, for performing the functions described in this application. They may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, or any other such configuration.
The present invention has been described above with reference to specific embodiments, but it should be clear to those skilled in the art that these descriptions are exemplary and do not limit the scope of the present invention. Those skilled in the art may make various variations and modifications to the present invention according to its spirit and principles, and these variations and modifications also fall within the scope of the present invention.

Claims (20)

  1. An extraction device for a background model, the device comprising:
    a first detection unit for detecting a moving speed of a predetermined area of a current image;
    a first updating unit for, when the moving speed exceeds a first threshold, updating the background model by taking image pixels that are not moving in the predetermined area as background pixels.
  2. The device according to claim 1, wherein the device further comprises a second detection unit for performing motion detection on the predetermined area to determine the moving images and the non-moving images.
  3. The device according to claim 1, wherein the device further comprises:
    a second updating unit for performing edge detection on the predetermined area or on the foreground image in the predetermined area, and updating the background model by taking image pixels of areas having no edges as background pixels.
  4. A traffic congestion condition detection device, comprising:
    a first extraction unit for extracting a foreground image from a current image according to a background model;
    a calculation unit for calculating a congestion index according to a first congestion index, which is negatively correlated with a moving speed of a predetermined area of the current image, and a second congestion index; the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined area to the number of pixels of the current image in the predetermined area;
    a third detection unit for determining a traffic congestion condition according to the congestion index.
  5. The device according to claim 4, wherein the calculation unit is configured to weight the first congestion index and the second congestion index to obtain the congestion index.
  6. The device according to claim 4, wherein the third detection unit is configured to determine that the traffic congestion condition is congestion when the congestion index is greater than or equal to a second threshold; or,
    the third detection unit is configured to determine that the traffic congestion condition is congestion when the congestion index is greater than or equal to a second threshold and remains at or above the second threshold for a time greater than or equal to a third threshold.
  7. The device according to claim 4, wherein the first congestion index is a negative exponential function of the moving speed.
  8. The device according to claim 4, wherein the device further comprises:
    a fourth detection unit for detecting the moving speed of the predetermined area of the current image.
  9. The device according to claim 4, wherein the device further comprises:
    a first index calculation unit for calculating the first congestion index according to the moving speed;
    a second index calculation unit for calculating the second congestion index.
  10. The device according to claim 5, wherein the device further comprises:
    a coefficient determining unit for determining, when the second congestion index is greater than or equal to a fourth threshold, that the weighting coefficient of the second congestion index is less than or equal to the weighting coefficient of the first congestion index;
    and for determining, when the second congestion index is less than the fourth threshold, that the weighting coefficient of the second congestion index is greater than the weighting coefficient of the first congestion index.
  11. The device according to claim 10, wherein the weighting coefficient of the second congestion index is a sigmoid function of the second congestion index.
  12. The device according to claim 4, wherein the device further comprises:
    a first background model extraction unit for, when the moving speed exceeds a first threshold, updating the background model by taking image pixels that are not moving in the predetermined area as background pixels.
  13. The device according to claim 12, wherein the device further comprises:
    a second background model extraction unit for performing edge detection on the predetermined area or on the foreground image in the predetermined area, and updating the background model by taking image pixels of areas having no edges as background pixels.
  14. A traffic congestion condition detection method, wherein the method comprises:
    extracting a foreground image from a predetermined area of a current image according to a background model;
    calculating a congestion index according to a first congestion index, which is negatively correlated with a moving speed of the predetermined area, and a second congestion index; the second congestion index is the ratio of the number of foreground pixels of the foreground image in the predetermined area to the number of pixels of the current image in the predetermined area;
    determining a traffic congestion condition according to the congestion index.
  15. The method according to claim 14, wherein calculating a congestion index according to the first congestion index, which is negatively correlated with the moving speed, and the second congestion index comprises:
    weighting the first congestion index and the second congestion index to obtain the congestion index.
  16. The method according to claim 14, wherein determining a traffic congestion condition according to the congestion index comprises:
    determining that the traffic congestion condition is congestion when the congestion index is greater than or equal to a second threshold; or,
    determining that the traffic congestion condition is congestion when the congestion index is greater than or equal to a second threshold and remains at or above the second threshold for a time greater than or equal to a third threshold.
  17. The method according to claim 14, wherein the first congestion index is a negative exponential function of the moving speed.
  18. The method according to claim 14, wherein the method further comprises:
    detecting the moving speed of the predetermined area of the current image.
  19. The method according to claim 15, wherein, when the second congestion index is greater than or equal to a fourth threshold, the weighting coefficient of the second congestion index is determined to be less than or equal to the weighting coefficient of the first congestion index;
    and, when the second congestion index is less than the fourth threshold, the weighting coefficient of the second congestion index is determined to be greater than the weighting coefficient of the first congestion index.
  20. The method according to claim 14, wherein the method further comprises:
    when the moving speed exceeds a first threshold, updating the background model by taking image pixels that are not moving in the predetermined area as background pixels; and/or,
    performing edge detection on the predetermined area or on the foreground image in the predetermined area, and updating the background model by taking image pixels of areas having no edges as background pixels.
PCT/CN2016/102156 2016-10-14 2016-10-14 Extraction device for background model, and traffic congestion condition detection method and device WO2018068311A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680087770.1A CN109479120A (zh) 2016-10-14 2016-10-14 背景模型的提取装置、交通拥堵状况检测方法和装置
PCT/CN2016/102156 WO2018068311A1 (zh) 2016-10-14 2016-10-14 Extraction device for background model, and traffic congestion condition detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/102156 WO2018068311A1 (zh) 2016-10-14 2016-10-14 Extraction device for background model, and traffic congestion condition detection method and device

Publications (1)

Publication Number Publication Date
WO2018068311A1 true WO2018068311A1 (zh) 2018-04-19

Family

ID=61906108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/102156 WO2018068311A1 (zh) 2016-10-14 2016-10-14 Extraction device for background model, and traffic congestion condition detection method and device

Country Status (2)

Country Link
CN (1) CN109479120A (zh)
WO (1) WO2018068311A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967394A (zh) * 2020-08-18 2020-11-20 北京林业大学 一种基于动静态网格融合策略的森林火灾烟雾根节点检测方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477628A (zh) * 2009-01-06 2009-07-08 青岛海信电子产业控股股份有限公司 车辆阴影去除方法和装置
CN101729872A (zh) * 2009-12-11 2010-06-09 南京城际在线信息技术有限公司 一种基于视频监控图像自动判别道路交通状态的方法
CN104077757A (zh) * 2014-06-09 2014-10-01 中山大学 一种融合实时交通状态信息的道路背景提取与更新方法
CN104680787A (zh) * 2015-02-04 2015-06-03 上海依图网络科技有限公司 一种道路拥堵检测方法
US20150310297A1 (en) * 2014-03-03 2015-10-29 Xerox Corporation Systems and methods for computer vision background estimation using foreground-aware statistical models
US20160284097A1 (en) * 2012-06-14 2016-09-29 International Business Machines Corporation Multi-cue object detection and analysis

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110113356A1 (en) * 2009-11-12 2011-05-12 Georgy Samsonidze Visual analysis module for investigation of specific physical processes
CN101777186B (zh) * 2010-01-13 2011-12-14 西安理工大学 一种多模态自动更新替换的背景建模方法
CN102346854A (zh) * 2010-08-03 2012-02-08 株式会社理光 前景物体检测方法和设备
KR101870902B1 (ko) * 2011-12-12 2018-06-26 삼성전자주식회사 영상 처리 장치 및 영상 처리 방법
CN103377472B (zh) * 2012-04-13 2016-12-14 富士通株式会社 用于去除附着噪声的方法和系统
CN104183127B (zh) * 2013-05-21 2017-02-22 北大方正集团有限公司 交通监控视频检测方法和装置
CN105335951B (zh) * 2014-06-06 2018-06-15 株式会社理光 背景模型更新方法和设备
CN105224914B (zh) * 2015-09-02 2018-10-23 上海大学 一种基于图的无约束视频中显著物体检测方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477628A (zh) * 2009-01-06 2009-07-08 青岛海信电子产业控股股份有限公司 车辆阴影去除方法和装置
CN101729872A (zh) * 2009-12-11 2010-06-09 南京城际在线信息技术有限公司 一种基于视频监控图像自动判别道路交通状态的方法
US20160284097A1 (en) * 2012-06-14 2016-09-29 International Business Machines Corporation Multi-cue object detection and analysis
US20150310297A1 (en) * 2014-03-03 2015-10-29 Xerox Corporation Systems and methods for computer vision background estimation using foreground-aware statistical models
CN104077757A (zh) * 2014-06-09 2014-10-01 中山大学 一种融合实时交通状态信息的道路背景提取与更新方法
CN104680787A (zh) * 2015-02-04 2015-06-03 上海依图网络科技有限公司 一种道路拥堵检测方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967394A (zh) * 2020-08-18 2020-11-20 北京林业大学 一种基于动静态网格融合策略的森林火灾烟雾根节点检测方法
CN111967394B (zh) * 2020-08-18 2024-05-17 北京林业大学 一种基于动静态网格融合策略的森林火灾烟雾根节点检测方法

Also Published As

Publication number Publication date
CN109479120A (zh) 2019-03-15

Similar Documents

Publication Publication Date Title
Sengar et al. Moving object detection based on frame difference and W4
JP6547990B2 (ja) 半自動画像セグメンテーション
WO2020151172A1 (zh) 运动目标检测方法、装置、计算机设备及存储介质
US9767570B2 (en) Systems and methods for computer vision background estimation using foreground-aware statistical models
US9846810B2 (en) Method, system and apparatus for tracking objects of a scene
WO2017084094A1 (zh) 烟雾检测装置、方法以及图像处理设备
US20130148852A1 (en) Method, apparatus and system for tracking an object in a sequence of images
US10445590B2 (en) Image processing apparatus and method and monitoring system
WO2022027931A1 (zh) 基于视频图像的运动车辆前景检测方法
JP2018205920A (ja) 学習プログラム、学習方法および物体検知装置
JP5360052B2 (ja) 物体検出装置
CN109919002B (zh) 黄色禁停线识别方法、装置、计算机设备及存储介质
WO2018068300A1 (zh) 图像处理方法和装置
US11107237B2 (en) Image foreground detection apparatus and method and electronic device
WO2021013049A1 (zh) 前景图像获取方法、前景图像获取装置和电子设备
KR20090062049A (ko) 영상 데이터 압축 전처리 방법 및 이를 이용한 영상 데이터압축 방법과, 영상 데이터 압축 시스템
WO2012174804A1 (zh) 视频中剧烈运动的检测方法及其装置
US20140056519A1 (en) Method, apparatus and system for segmenting an image in an image sequence
CN103700087A (zh) 移动侦测方法和装置
WO2021227723A1 (zh) 目标检测方法、装置、计算机设备及可读存储介质
US20170103536A1 (en) Counting apparatus and method for moving objects
Paul et al. Moving object detection using modified temporal differencing and local fuzzy thresholding
Toral et al. Improved sigma-delta background estimation for vehicle detection
WO2018068311A1 (zh) 背景模型的提取装置、交通拥堵状况检测方法和装置
KR101851492B1 (ko) 번호판 인식 방법 및 번호판 인식 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16918561

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16918561

Country of ref document: EP

Kind code of ref document: A1