CN112215109A - Vehicle detection method and system based on scene analysis - Google Patents


Info

Publication number
CN112215109A
CN112215109A (application CN202011053608.2A)
Authority
CN
China
Prior art keywords
image
background
vehicle
gradient
foreground
Prior art date
Legal status
Pending
Application number
CN202011053608.2A
Other languages
Chinese (zh)
Inventor
刘军发
郑爱兵
刘宏
Current Assignee
Genepoint Beijing Technology Co ltd
Original Assignee
Genepoint Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Genepoint Beijing Technology Co ltd filed Critical Genepoint Beijing Technology Co ltd
Priority to CN202011053608.2A priority Critical patent/CN112215109A/en
Publication of CN112215109A publication Critical patent/CN112215109A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/40: Scenes; scene-specific elements in video content
    • G06T 5/20: Image enhancement or restoration by the use of local operators
    • G06T 5/30: Erosion or dilatation, e.g. thinning
    • G06T 5/94
    • G06T 7/187: Segmentation; edge detection involving region growing, region merging, or connected component labelling
    • G06T 7/194: Segmentation; edge detection involving foreground-background segmentation
    • G06T 7/215: Motion-based segmentation
    • G06T 7/90: Determination of colour characteristics
    • G06V 2201/08: Detecting or categorising vehicles

Abstract

The invention provides a vehicle detection method and system based on scene analysis. The system comprises: a moving-target detection module, which acquires a scene video image, performs adaptive background modeling to obtain a background image and a foreground image, and segments moving targets; a moving-shadow removal module, which judges whether the gradient features of the background image need to be preserved and removes moving shadows from the moving targets based on that judgment; and a vehicle detection and segmentation module, which judges whether vehicles are stuck together or occluded, segments the vehicles if so, and otherwise directly obtains correct detection and segmentation of each vehicle. The scheme markedly improves the accuracy of vehicle recognition and monitoring in complex traffic scenes, and offers stable background modeling, low resource consumption, and high recognition speed.

Description

Vehicle detection method and system based on scene analysis
Technical Field
The invention relates to the field of intelligent transportation and to the combination of Internet-of-Things communication with intelligent transportation, and in particular to a scene-analysis-based method and system for intelligent vehicle detection.
Background
As an important component of intelligent transportation, vehicle detection has substantial research value and broad application prospects. Video-based vehicle detection lies at the intersection of several disciplines, including computer vision, pattern recognition, and signal analysis. Real-time vehicle detection and tracking is one of the key technologies of intelligent transportation systems: by detecting and tracking vehicles in complex scenes such as expressways or intersections, useful traffic statistics can be obtained, including traffic flow, vehicle speed, peak-period flow distribution, and lane occupancy. This information supports subsequent intelligent analysis, such as traffic guidance, congestion relief, and the monitoring or even prevention of sudden events such as road faults and traffic accidents.
At present, many technologies are applied to traffic vehicle detection, including electromagnetic induction, ultrasonic sensing, inductive ground-loop sensing, and video image processing. Video-based detection and intelligent analysis has notable advantages: surveillance video carries a large amount of information, and when congestion or a traffic violation occurs, the system can raise an alarm quickly and automatically record or snapshot the scene, which can serve as evidence for violation penalties or accident-liability judgments. In addition, video monitoring equipment is easy to install and remove, and such systems are convenient to upgrade. With the continued development of computer technology, image processing has matured and now provides strong theoretical and technical support for video-based motion detection, so video-based vehicle detection is favored and is gradually replacing other vehicle detection technologies. However, because cameras at traffic intersections are fixed in position, the background of the captured video sequence changes slowly, while the monitoring system faces strict real-time requirements; moving vehicles at intersections also become stuck together or occlude one another, so they cannot be detected and analyzed effectively. An efficient and accurate vehicle detection and analysis method is therefore needed for traffic-surveillance video, especially for scenes with complex traffic conditions such as intersections.
Disclosure of Invention
In view of the above, the present invention provides a vehicle detection method and system based on scene analysis, which researches a related technology of moving vehicle detection and identification based on video analysis and image processing for a traffic monitoring scene video, especially an intersection scene. Specifically, the invention provides the following technical scheme:
in one aspect, an embodiment of the present invention provides a vehicle detection method based on scene analysis, where the method includes:
step 1, obtaining a scene video image, carrying out background modeling in a self-adaptive mode to obtain a background image and a foreground image, and segmenting a moving target;
step 2, if the gradient features of the background image are preserved, executing: graying the original image, and ANDing the grayed image with the binary image of the foreground image to obtain a foreground template image; ANDing the background image with the binary image of the foreground image to obtain a background template image; removing the background pixels contained in the foreground template image by edge erosion to obtain eroded foreground and background template images; and removing the moving shadow from the moving target by gradient difference based on the eroded foreground and background template images;
if the gradient feature of the background image is not preserved, performing: based on the background image and the foreground image, removing the moving shadow in the moving target in a gradient difference mode;
and step 3, judging whether vehicles are stuck together or occluded; if so, segmenting the vehicles and obtaining correct detection and segmentation of each vehicle; and if not, directly obtaining correct detection and segmentation of each vehicle.
Preferably, in step 1, background modeling in the adaptive mode proceeds as follows: in the initial modeling stage, the background pixel is the arithmetic mean of the pixels of n consecutive frames; once the elapsed time T reaches a certain length (preferably 10 seconds), a Gaussian weighted average replaces the arithmetic mean; and after the background image is extracted based on the Gaussian weighted average, the background image is dynamically updated in real time.
Preferably, the gaussian weighted average modeling manner is:
B_{n+1}(x, y) = α·B_n(x, y) + (1 - α)·I_n(x, y)
where α is the learning rate, taking a value in (0, 1]; B_{n+1}(x, y) is the Gaussian weighted average of the pixel in frame n+1; B_n(x, y) is the Gaussian weighted average of the pixel in frame n; and I_n(x, y) is the pixel value of the n-th frame image at point (x, y).
Preferably, the real-time dynamic update of the background image is performed by the following method:
when the background image is updated, only the background pixel in the current frame image is used for updating the background image area, and the method is as follows:
B_{n+1}(x, y) = M_n(x, y)·B_n(x, y) + (1 - M_n(x, y))·[α·B_n(x, y) + (1 - α)·I_n(x, y)]
where M_n(x, y) is a binary mask image, specifically:
M_n(x, y) = 1 if |I_n(x, y) - B_n(x, y)| > T, and M_n(x, y) = 0 otherwise
where T is a binarization threshold; α is the learning rate, taking a value in (0, 1]; B_{n+1}(x, y) is the Gaussian weighted average of the pixel in frame n+1; B_n(x, y) is the Gaussian weighted average of the pixel in frame n; and I_n(x, y) is the pixel value of the n-th frame image at point (x, y).
Preferably, in step 2, removing the moving shadow from the moving target by gradient difference comprises: obtaining gradient information of the current image, denoted g(x, y), with gradient operators, each operator being convolved with the current image to obtain a gradient value;
obtaining gradient maps corresponding to the foreground and the background respectively by using the following formula:
G(x, y) = max{ |g_1(x, y)|, |g_2(x, y)|, |g_3(x, y)|, |g_4(x, y)| }
where G(x, y) is the pixel value of the gradient map at point (x, y), and g_1 through g_4 are the responses of the horizontal, vertical, and two diagonal gradient operators;
and then differencing the gradient maps of the foreground and the background to remove the moving shadow.
Preferably, in step 3, the vehicle is divided by:
dividing the whole area of the scene video image into different lanes, and drawing virtual lane lines at the edges of the lanes;
setting a detection area of each lane based on the virtual lane line;
and, based on the target position, area, and dominant body colour of each vehicle, merging connected regions on the two sides of the virtual lane lines, thereby segmenting vehicles that are stuck together.
In addition, the invention also provides a vehicle detection system based on scene analysis, which comprises:
the moving target detection module is used for acquiring a scene video image, performing background modeling in a self-adaptive mode to acquire a background image and a foreground image and segmenting a moving target;
the moving shadow removal module is used for judging whether the gradient features of the background image need to be preserved; and for:
when the gradient features of the background image are preserved: graying the original image, and ANDing the grayed image with the binary image of the foreground image to obtain a foreground template image; ANDing the background image with the binary image of the foreground image to obtain a background template image; removing the background pixels contained in the foreground template image by edge erosion to obtain eroded foreground and background template images; and removing the moving shadow from the moving target by gradient difference based on the eroded foreground and background template images;
when the gradient features of the background image are not preserved: removing the moving shadow from the moving target by gradient difference based directly on the background image and the foreground image;
and the vehicle detection and segmentation module is used for judging whether vehicles are stuck together or occluded; if so, segmenting the vehicles and obtaining correct detection and segmentation of each vehicle; and if not, directly obtaining correct detection and segmentation of each vehicle.
Preferably, background modeling in the adaptive mode proceeds as follows: in the initial modeling stage, the background pixels are the arithmetic mean of the pixels of n consecutive frames; when the elapsed time T reaches a certain length (preferably 10 seconds), a Gaussian weighted average replaces the arithmetic mean; and after the background image is extracted based on the Gaussian weighted average, the background image is dynamically updated in real time.
Preferably, the removing the moving shadow in the moving object by means of the gradient difference includes: gradient information in the current image is obtained by using a gradient operator and is recorded as g (x, y), and the gradient operator is used for carrying out convolution with the current image to obtain a gradient value;
obtaining gradient maps corresponding to the foreground and the background respectively by using the following formula:
G(x, y) = max{ |g_1(x, y)|, |g_2(x, y)|, |g_3(x, y)|, |g_4(x, y)| }
where G(x, y) is the pixel value of the gradient map at point (x, y), and g_1 through g_4 are the responses of the horizontal, vertical, and two diagonal gradient operators;
and then differencing the gradient maps of the foreground and the background to remove the moving shadow.
Preferably, the vehicle is divided by:
dividing the whole area of the scene video image into different lanes, and drawing virtual lane lines at the edges of the lanes;
setting a detection area of each lane based on the virtual lane line;
and, based on the target position, area, and dominant body colour of each vehicle, merging connected regions on the two sides of the virtual lane lines, thereby segmenting vehicles that are stuck together.
In addition, the invention also provides a vehicle detection device based on scene analysis, which at least comprises a processor and a storage device connected with the processor, wherein the storage device stores instructions which can be read and executed by the processor, and the instructions can execute the vehicle detection method based on scene analysis.
Compared with the prior art, the technical scheme of the invention develops a video-based vehicle detection and analysis method and system for traffic-scene video, particularly intersection scenes. Using video image processing, it performs robust detection of moving vehicles through intelligent analysis of the captured traffic-scene video, together with key techniques such as vision-based vehicle type recognition, and implements a vision-based vehicle type recognition system. The scheme markedly improves the accuracy of vehicle recognition and monitoring in complex traffic scenes, provides stable background modeling, effectively detects and segments vehicle targets that are stuck together or occluded, consumes few hardware resources, runs fast, and has broad application prospects in traffic-surveillance video and other scenes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without any creative effort.
FIG. 1 is a flow chart of the overall operation of the system of an embodiment of the present invention;
FIG. 2 is a diagram illustrating background modeling results according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating comparison of moving shadow removal effects according to an embodiment of the present invention, wherein a is a foreground binary image, b is a foreground template image, and c is a background template image;
FIG. 4 is a schematic diagram illustrating gradient-based shadow removal comparison with edge erosion, where a is a foreground template map after edge erosion, b is a background template map after edge erosion, c is a vehicle region after shadow removal, d is a foreground gradient map, e is a background gradient map, and f is a gradient difference map;
fig. 5 is a schematic view of vehicle segmentation based on virtual lane lines according to an embodiment of the present invention, where a is an original image of virtual lane segmentation, b is a foreground binary image obtained by segmentation, and c, d, and e are foreground binary images extracted from respective detection regions of three lanes, respectively;
fig. 6 is a schematic diagram of object segmentation based on virtual lane lines according to an embodiment of the present invention, where a is a final merging and segmentation result of the moving vehicles, b is a foreground binary image, and c is a sub-image of the processed white vehicle.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any inventive step, are within the scope of the present invention.
It will be appreciated by those skilled in the art that the following specific examples or embodiments are a series of presently preferred arrangements of the invention for further explaining its content, and that these arrangements may be combined or used in conjunction with one another unless it is specifically stated that certain examples or embodiments cannot be combined or used with others. Meanwhile, the following specific examples or embodiments are provided only as optimized arrangements and are not to be understood as limiting the protection scope of the present invention.
In a specific embodiment, the technical scheme provided by the invention can be realized as a vehicle detection method. First, an improved background modeling method is used to extract the moving foreground region, which improves the stability of the background model. A gradient-difference-based moving-shadow removal method is then applied, and moving vehicles are segmented based on virtual lane lines, so that vehicle targets that are stuck together or occluded can be detected and separated, yielding the vehicle target information.
The embodiments related to the present solution are explained in detail below with reference to the drawings. The method provided by the invention can be realized by the following preferred modes in combination with the method shown in FIG. 1:
step 101, modeling by using a mean value method, wherein the background pixel is an arithmetic mean value of the pixels of the continuous n frames of images, as shown in the following formula.
B_n(x, y) = (1/n) · Σ_{k=1}^{n} I_k(x, y)   (1)
where n is the current frame number, and B_n(x, y) and I_k(x, y) are the pixel values of the background image and the k-th frame image at point (x, y), respectively. Expressed in incremental form:
B_n(x, y) = B_{n-1}(x, y) + (1/n) · [I_n(x, y) - B_{n-1}(x, y)]   (2)
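To make the update concrete, the incremental mean of formula (2) can be sketched in Python with NumPy (the function name and the test frames below are illustrative, not from the patent):

```python
import numpy as np

def update_mean_background(B_prev, I_n, n):
    """Incremental arithmetic-mean update, formula (2):
    B_n = B_{n-1} + (I_n - B_{n-1}) / n."""
    return B_prev + (I_n - B_prev) / n

# Feeding n frames reproduces the direct n-frame average of formula (1).
frames = [np.full((2, 2), v, dtype=float) for v in (10.0, 20.0, 30.0)]
B = frames[0]
for k, frame in enumerate(frames[1:], start=2):
    B = update_mean_background(B, frame, k)
```

Running three constant frames of 10, 20 and 30 leaves B at their mean, 20.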
Step 102, as the time T advances, the number n of the video frames continuously increases, and the influence weight of the current frame image on the background decreases, so that the change of the background in the modeling process cannot be adaptively adjusted in time, and the background modeling method needs to be adjusted.
Step 103, replacing the arithmetic mean with the Gaussian weighted mean, the modeling formula becomes
B_{n+1}(x, y) = α·B_n(x, y) + (1 - α)·I_n(x, y)   (3)
where α is the learning rate, a proportion parameter taking a value in (0, 1]. The influence weight of the n-th frame image on the background is thus a fixed value, so the speed at which the background adapts during modeling no longer depends on the running time.
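A minimal sketch of the Gaussian-weighted (exponential) update of formula (3); the function name and the chosen α are illustrative assumptions:

```python
import numpy as np

def update_background(B_n, I_n, alpha):
    """Formula (3): B_{n+1} = alpha * B_n + (1 - alpha) * I_n.
    alpha in (0, 1]; a larger alpha adapts more slowly."""
    return alpha * B_n + (1.0 - alpha) * I_n

# The background converges toward a static scene at a rate set by alpha.
B = np.zeros((2, 2))
frame = np.full((2, 2), 100.0)
for _ in range(3):
    B = update_background(B, frame, alpha=0.5)
```

After three updates with alpha = 0.5 the background has moved from 0 to 87.5, illustrating the fixed per-frame influence weight described above.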
Step 104: after a clean background image has been extracted in the modeling stage, the background image must be updated dynamically in real time to adapt to change. To prevent vehicles that stop at the intersection for a long time from contaminating the scene background, the background update method is improved for intersection surveillance video: only the background pixels of the current frame image are used to update the background region, as shown in the following formula:
B_{n+1}(x, y) = M_n(x, y)·B_n(x, y) + (1 - M_n(x, y))·[α·B_n(x, y) + (1 - α)·I_n(x, y)]   (4)
where M_n(x, y) is a binary mask image, obtained by differencing the frame image with the background image, as follows:
M_n(x, y) = 1, if |I_n(x, y) - B_n(x, y)| > T;  M_n(x, y) = 0, otherwise   (5)
Here T is a binarization threshold whose value must be chosen as a trade-off: if it is too small, changes in the background cannot be absorbed in real time; if it is too large, foreground pixels cannot be reliably kept out of the background image.
Step 105: as time T increases, regions that are background in the current frame are updated in real time, while background pixels inside target-object regions remain unchanged, avoiding blending moving pixels into the background image.
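The masked update of formulas (4) and (5), as reconstructed above, can be sketched as follows (the threshold and pixel values are illustrative):

```python
import numpy as np

def selective_update(B_n, I_n, alpha, T):
    """Formulas (4)-(5): pixels whose frame/background difference
    exceeds T are treated as foreground (M_n = 1) and keep the old
    background value; only background pixels are blended in."""
    M = (np.abs(I_n - B_n) > T).astype(float)
    return M * B_n + (1.0 - M) * (alpha * B_n + (1.0 - alpha) * I_n)

B = np.full((1, 2), 50.0)
# Left pixel: slow background drift; right pixel: a moving object.
I = np.array([[55.0, 200.0]])
B_new = selective_update(B, I, alpha=0.9, T=30.0)
```

The drifting background pixel is nudged to 50.5 while the moving-object pixel leaves the background untouched at 50, which is exactly the behaviour step 105 describes.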
Step 106, segmenting the moving object, and the updated background image of the scene is shown in fig. 2.
In step 107, after moving-target segmentation, a moving shadow may be detected as part of a target and may also cause two moving targets to stick together, leading to false detections or missed detections in the vehicle count. Because a moving shadow preserves the gradient features of the underlying background well, a gradient-difference-based moving-shadow removal method is adopted. In a preferred mode, a judging step can be added here to decide whether the gradient features of the background need to be preserved.
If the judgment in step 107 is yes, step 108 is executed: the original image is grayed and ANDed with the foreground binary image (FIG. 3a) to obtain the foreground template image (FIG. 3b); the background template image (FIG. 3c) is obtained from the background image and the foreground binary image in the same way.
Step 109 is then executed: the background pixels contained in the foreground template are removed by edge erosion, yielding eroded foreground and background template images; FIGS. 4a and 4b are the foreground and background template images after edge erosion, respectively.
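Steps 108 and 109 amount to masking the grayscale frame and the background with an eroded foreground mask; a stdlib-free sketch (the cross-shaped erosion and all names here are illustrative assumptions):

```python
import numpy as np

def erode(mask):
    """Minimal binary erosion with a cross-shaped (4-neighbour)
    structuring element; border pixels become 0."""
    out = np.zeros_like(mask)
    out[1:-1, 1:-1] = (
        mask[1:-1, 1:-1]
        & mask[:-2, 1:-1] & mask[2:, 1:-1]   # up / down neighbours
        & mask[1:-1, :-2] & mask[1:-1, 2:]   # left / right neighbours
    )
    return out

def make_templates(gray, background, fg_mask):
    """Foreground/background template images: AND the (eroded)
    foreground mask with the grayed frame and with the background,
    discarding mixed boundary pixels as in step 109."""
    m = erode(fg_mask)
    return np.where(m, gray, 0), np.where(m, background, 0)

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True                 # a 3x3 foreground blob
gray = np.full((5, 5), 120)
bg = np.full((5, 5), 80)
fg_template, bg_template = make_templates(gray, bg, mask)
```

Only the blob's interior pixel survives the erosion, so both templates are zero except at its centre.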
Step 110 is then performed to obtain the shadow-free vehicle region. The method used in the invention is as follows:
gradient information, denoted g(x, y), is computed with four computationally cheap operators: a horizontal operator, a vertical operator, and operators for the two diagonal directions. Convolving these four operators with the current frame image extracts the gradient values in the horizontal, vertical, and two diagonal directions, respectively.
The pixel values of the points with coordinates (x, y) in the gradient images of the foreground and background templates are calculated by formula (6), giving a foreground gradient map (FIG. 4d) and a background gradient map (FIG. 4e). Shadows and road-surface edges appear in both gradient maps, so the two maps are differenced to obtain a gradient difference map (FIG. 4f, after a morphological dilation), in which the moving shadow is removed; FIG. 4c shows the final shadow-free vehicle region.
G(x, y) = max{ |g_1(x, y)|, |g_2(x, y)|, |g_3(x, y)|, |g_4(x, y)| }   (6)
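As a hedged reading of step 110 (the exact operator kernels are not given in the text; the kernels, threshold, and names below are assumptions), the gradient maps and their difference can be sketched as:

```python
import numpy as np

# Assumed directional kernels: horizontal, vertical, two diagonals.
KERNELS = [
    np.array([[-1, 0, 1]]),
    np.array([[-1], [0], [1]]),
    np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]]),
    np.array([[1, 0, 0], [0, 0, 0], [0, 0, -1]]),
]

def conv2_same(img, k):
    """Naive zero-padded 'same' correlation (no SciPy dependency)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            out += k[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def gradient_map(img):
    """G(x, y) as the maximum absolute response of the four operators,
    one plausible reading of formula (6)."""
    return np.max([np.abs(conv2_same(img, k)) for k in KERNELS], axis=0)

def remove_shadow(fg_template, bg_template, thresh=10.0):
    """A shadow keeps the background's gradients, so its gradient
    difference is small; keep only pixels with a large difference."""
    diff = np.abs(gradient_map(fg_template) - gradient_map(bg_template))
    return diff > thresh

bg = np.zeros((5, 5))
fg = np.zeros((5, 5))
fg[:, 3:] = 100.0          # a vehicle edge absent from the background
shadow_free = remove_shadow(fg, bg)
```

Pixels along the vehicle edge survive, while flat shadow-like regions, whose gradients match the background, are suppressed.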
If the result of step 107 is no, step 110 is executed directly to obtain the shadow-free vehicle region. In this case, the segmented foreground image and the obtained background image are simply processed with the gradient-difference method to obtain the shadow-free vehicle region.
Step 111 is then executed to judge whether vehicles are stuck together or occluded: when several vehicle targets stick together or occlude one another, their target blobs fuse into a single region in the foreground binary image.
If the result of step 111 is yes, step 112 is executed to divide the scene into virtual lanes and detect the vehicle targets. The invention provides a moving-vehicle segmentation method that fuses adjacent-frame image information with virtual lane lines, as follows:
the roads at the traffic intersection are divided into different lanes by white lane lines, most vehicles run according to the lanes, and cross-lane running or lane change can occur in few cases. Based on the scene characteristics, the whole monitoring area is divided into different lanes, a real lane is simulated by a method of drawing virtual lane lines on the edges of the lanes, and a detection area is set for each lane. As shown in fig. 5a, fig. 5a is an original image, fig. 5b is a foreground binary image obtained by segmentation, and fig. 5c, fig. 5d, and fig. 5e are foreground binary images extracted from respective detection regions of three lanes.
Step 113 is then executed: regions are merged and stuck-together vehicles are divided. According to the position and area of each vehicle target and the dominant colour of its body, connected regions on the two sides of a lane line are merged, and stuck-together vehicles are separated so as to obtain independent vehicle regions. FIG. 6b is a foreground binary image in which vehicles are clearly stuck together; FIG. 6c is the sub-image of the processed white vehicle.
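The merging criterion of step 113 (position, area, body colour) can be sketched with simplified region records; the overlap test, colour tolerance, and all names here are illustrative assumptions, not the patent's exact rule:

```python
import numpy as np

def color_close(c1, c2, tol):
    """Dominant body colours agree within a Euclidean tolerance."""
    return np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)) < tol

def merge_across_lane(left, right, tol=30.0):
    """Merge fragments of one vehicle split by a virtual lane line:
    two fragments on opposite sides are joined when their vertical
    spans overlap and their dominant colours match.
    Each region is a dict with 'y0', 'y1' (row span) and 'color'."""
    merged, used = [], set()
    for rl in left:
        for i, rr in enumerate(right):
            overlap = min(rl["y1"], rr["y1"]) - max(rl["y0"], rr["y0"])
            if i not in used and overlap > 0 and color_close(rl["color"], rr["color"], tol):
                merged.append({"y0": min(rl["y0"], rr["y0"]),
                               "y1": max(rl["y1"], rr["y1"]),
                               "color": rl["color"]})
                used.add(i)
                break
        else:
            merged.append(rl)
    merged.extend(r for i, r in enumerate(right) if i not in used)
    return merged

# A white vehicle straddling the lane line, plus a separate dark vehicle.
left = [{"y0": 10, "y1": 40, "color": (250, 250, 250)}]
right = [{"y0": 12, "y1": 38, "color": (248, 252, 249)},
         {"y0": 100, "y1": 130, "color": (20, 20, 20)}]
vehicles = merge_across_lane(left, right)
```

The two white fragments collapse into one vehicle region and the dark vehicle stays separate, yielding two independent vehicle regions.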
Then step 114 is performed and the vehicle is correctly detected and segmented. Fig. 6a shows the final merging and segmentation results of the moving vehicles obtained by the algorithm, and four vehicles appearing in the monitored scene are all correctly detected and segmented.
If the result returned in step 111 is no, step 114 is executed directly and the vehicle is detected correctly.
In addition, the technical solution of the present invention can also be implemented in a vehicle detection system based on scene analysis, so as to implement the vehicle detection method provided by the present invention, and in a specific embodiment, the system includes:
the moving target detection module is used for acquiring a scene video image, performing background modeling in a self-adaptive mode to acquire a background image and a foreground image and segmenting a moving target;
the moving shadow removal module is used for judging whether the gradient features of the background image need to be preserved; and for:
when the gradient features of the background image are preserved: graying the original image, and ANDing the grayed image with the binary image of the foreground image to obtain a foreground template image; ANDing the background image with the binary image of the foreground image to obtain a background template image; removing the background pixels contained in the foreground template image by edge erosion to obtain eroded foreground and background template images; and removing the moving shadow from the moving target by gradient difference based on the eroded foreground and background template images;
when the gradient features of the background image are not preserved: removing the moving shadow from the moving target by gradient difference based directly on the background image and the foreground image;
and the vehicle detection and segmentation module is used for judging whether vehicles are stuck together or occluded; if so, segmenting the vehicles and obtaining correct detection and segmentation of each vehicle; and if not, directly obtaining correct detection and segmentation of each vehicle.
Preferably, the adaptive mode performs background modeling, and the specific mode is as follows: in the initial modeling stage, the background pixels are the arithmetic mean value of the pixels of the continuous n frames of images; replacing the arithmetic mean with a gaussian weighted mean when the time T increases to 10 seconds; and after extracting the background image based on the Gaussian weighted average value, dynamically updating the background image in real time.
Preferably, the removing the moving shadow in the moving object by means of the gradient difference includes: gradient information in the current image is obtained by using a gradient operator and is recorded as g (x, y), and the gradient operator is used for carrying out convolution with the current image to obtain a gradient value;
obtaining gradient maps corresponding to the foreground and the background respectively by using the following formula:
G(x, y) = max{ |g_1(x, y)|, |g_2(x, y)|, |g_3(x, y)|, |g_4(x, y)| }
where G(x, y) is the pixel value of the gradient map at point (x, y), and g_1 through g_4 are the responses of the horizontal, vertical, and two diagonal gradient operators;
and then differencing the gradient maps of the foreground and the background to remove the moving shadow.
Preferably, the vehicle is divided by:
dividing the whole area of the scene video image into different lanes, and drawing virtual lane lines at the edges of the lanes;
setting a detection area of each lane based on the virtual lane line;
and, based on the target position, area, and dominant body colour of each vehicle, merging connected regions on the two sides of the virtual lane lines, thereby segmenting vehicles that are stuck together.
In addition, the invention also provides a vehicle detection device based on scene analysis, which at least comprises a processor and a storage device connected with the processor, wherein the storage device stores instructions which can be read and executed by the processor, and the instructions can execute the vehicle detection method based on scene analysis.
In a more specific embodiment, the technical scheme provided by the invention is applied to traffic detection at a crossroad intersection in a certain city. After a vehicle enters the intersection:
(1) a camera at a traffic intersection starts to acquire video images;
(2) the vehicle detection system starts to construct an adaptive background model, continuously updating it as the time T increases, and segments the moving targets;
(3) the vehicle detection system adopts a gradient difference-based motion shadow removal method to obtain a shadow-removed vehicle region;
(4) after the vehicle detection system completes the robust modeling of the background and extracts the foreground vehicle regions, the four vehicles appearing in the monitored scene are correctly detected and segmented by a moving-vehicle segmentation method that fuses adjacent-frame image information with the virtual lane lines.
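Steps (1)-(4) can be condensed into a single processing loop, sketched below (a hypothetical function of our own; only background construction and foreground thresholding are shown, with shadow removal and lane-based splitting following afterwards in the full pipeline):

```python
import numpy as np

def detect_vehicles(frames, alpha=0.95, bootstrap=10, fg_thresh=25.0):
    """Minimal end-to-end loop mirroring steps (1)-(4): maintain an
    adaptive background (arithmetic mean, then weighted running mean)
    and threshold each frame's difference into a foreground mask."""
    background = None
    masks = []
    for i, frame in enumerate(frames):
        frame = frame.astype(np.float64)
        if i < bootstrap:
            # Initial stage: arithmetic mean of the frames seen so far.
            background = frame if background is None else \
                (background * i + frame) / (i + 1)
        else:
            # Steady state: exponentially weighted update.
            background = alpha * background + (1 - alpha) * frame
        masks.append(np.abs(frame - background) > fg_thresh)
    return masks
```

On a static scene the masks stay empty; a bright patch entering the view after the bootstrap phase shows up immediately in the final mask.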
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A vehicle detection method based on scene analysis is characterized by comprising the following steps:
step 1, obtaining a scene video image, carrying out background modeling in a self-adaptive mode to obtain a background image and a foreground image, and segmenting a moving target;
step 2, if the gradient features of the background image are retained, executing: graying the original image, and performing an AND operation between the grayed original image and the binary image of the foreground image to obtain a foreground template image; performing an AND operation between the background image and the binary image of the foreground image to obtain a background template image; removing the background pixels contained in the foreground template image through edge erosion to obtain an eroded foreground template image and an eroded background template image; and removing the moving shadow from the moving target by means of gradient difference, based on the eroded foreground template image and the eroded background template image;
if the gradient features of the background image are not retained, executing: removing the moving shadow from the moving target by means of gradient difference, based on the background image and the foreground image;
and step 3, judging whether vehicles are adhered or occluded; if so, segmenting the vehicles to obtain correct detection and segmentation; if not, obtaining correct detection and segmentation directly.
2. The method according to claim 1, wherein in step 1, the adaptive mode performs background modeling as follows: in the initial modeling stage, each background pixel is the arithmetic mean of the corresponding pixels over n consecutive frames; when the time T increases to a certain duration, the arithmetic mean is replaced with a Gaussian weighted mean; and after the background image is extracted based on the Gaussian weighted mean, the background image is dynamically updated in real time; preferably, the certain duration is 10 seconds.
3. The method of claim 2, wherein the gaussian weighted average is modeled by:
B_{n+1}(x, y) = αB_n(x, y) + (1 − α)I_n(x, y)
wherein α is the learning rate, taking values in (0, 1]; B_{n+1}(x, y) is the Gaussian weighted average of the pixel in frame n+1; B_n(x, y) is the Gaussian weighted average of the pixel in frame n; and I_n(x, y) is the pixel value of the n-th frame image at point (x, y).
4. The method of claim 2, wherein the real-time dynamic updating of the background image is performed by:
when the background image is updated, only the background pixel in the current frame image is used for updating the background image area, and the method is as follows:
B_{n+1}(x, y) = M_n(x, y)·B_n(x, y) + (1 − M_n(x, y))·[αB_n(x, y) + (1 − α)I_n(x, y)]
wherein M isn(x, y) is a binary mask image, and specifically includes:
M_n(x, y) = 1 if |I_n(x, y) − B_n(x, y)| > T, and M_n(x, y) = 0 otherwise
wherein T is the binarization threshold; α is the learning rate, taking values in (0, 1]; B_{n+1}(x, y) is the Gaussian weighted average of the pixel in frame n+1; B_n(x, y) is the Gaussian weighted average of the pixel in frame n; and I_n(x, y) is the pixel value of the n-th frame image at point (x, y).
5. The method according to claim 1, wherein in step 2, removing the moving shadow from the moving target by means of gradient difference comprises: obtaining the gradient information of the current image with a gradient operator, recorded as g(x, y), the gradient operator being convolved with the current image to obtain the gradient values;
obtaining gradient maps corresponding to the foreground and the background respectively by using the following formula:
g(x, y) = √((∂G(x, y)/∂x)² + (∂G(x, y)/∂y)²)
wherein G (x, y) is a pixel value at a point (x, y);
and then differencing the gradient maps corresponding to the foreground and the background to remove the moving shadow.
6. The method of claim 1, wherein in step 3, the vehicle is segmented by:
dividing the whole area of the scene video image into different lanes, and drawing virtual lane lines at the edges of the lanes;
setting a detection area of each lane based on the virtual lane line;
and merging the regions on the two sides of each virtual lane line, based on the target position, the area and the dominant body color of each vehicle, so as to segment adhered vehicles.
7. A vehicle detection system based on scene analysis, the system comprising:
the moving target detection module is used for acquiring a scene video image, performing background modeling in a self-adaptive mode to acquire a background image and a foreground image and segmenting a moving target;
the moving shadow removal module is used for judging whether the gradient features of the background image need to be retained; and for
when the gradient features of the background image are retained, graying the original image, and performing an AND operation between the grayed original image and the binary image of the foreground image to obtain a foreground template image; performing an AND operation between the background image and the binary image of the foreground image to obtain a background template image; removing the background pixels contained in the foreground template image through edge erosion to obtain an eroded foreground template image and an eroded background template image; and removing the moving shadow from the moving target by means of gradient difference, based on the eroded foreground template image and the eroded background template image;
when the gradient features of the background image are not retained, removing the moving shadow from the moving target by means of gradient difference, based on the background image and the foreground image;
and the vehicle detection and segmentation module is used for judging whether vehicles are adhered or occluded; if so, the vehicles are segmented to obtain correct detection and segmentation; if not, correct detection and segmentation are obtained directly.
8. The system of claim 7, wherein the adaptive mode performs background modeling as follows: in the initial modeling stage, each background pixel is the arithmetic mean of the corresponding pixels over n consecutive frames; when the time T increases to a certain duration, the arithmetic mean is replaced with a Gaussian weighted mean; and after the background image is extracted based on the Gaussian weighted mean, the background image is dynamically updated in real time; preferably, the certain duration is 10 seconds.
9. The system of claim 7, wherein removing the moving shadow from the moving target by means of gradient difference comprises: obtaining the gradient information of the current image with a gradient operator, recorded as g(x, y), the gradient operator being convolved with the current image to obtain the gradient values;
obtaining gradient maps corresponding to the foreground and the background respectively by using the following formula:
g(x, y) = √((∂G(x, y)/∂x)² + (∂G(x, y)/∂y)²)
wherein G (x, y) is a pixel value at a point (x, y);
and then differencing the gradient maps corresponding to the foreground and the background to remove the moving shadow.
10. The system of claim 7, wherein the vehicle is segmented by:
dividing the whole area of the scene video image into different lanes, and drawing virtual lane lines at the edges of the lanes;
setting a detection area of each lane based on the virtual lane line;
and merging the regions on the two sides of each virtual lane line, based on the target position, the area and the dominant body color of each vehicle, so as to segment adhered vehicles.
CN202011053608.2A 2020-09-29 2020-09-29 Vehicle detection method and system based on scene analysis Pending CN112215109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011053608.2A CN112215109A (en) 2020-09-29 2020-09-29 Vehicle detection method and system based on scene analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011053608.2A CN112215109A (en) 2020-09-29 2020-09-29 Vehicle detection method and system based on scene analysis

Publications (1)

Publication Number Publication Date
CN112215109A true CN112215109A (en) 2021-01-12

Family

ID=74052096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011053608.2A Pending CN112215109A (en) 2020-09-29 2020-09-29 Vehicle detection method and system based on scene analysis

Country Status (1)

Country Link
CN (1) CN112215109A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463973A (en) * 2022-01-29 2022-05-10 北京科技大学天津学院 Traffic state detection method based on images
CN114463973B (en) * 2022-01-29 2022-10-04 北京科技大学天津学院 Image-based traffic state detection method

Similar Documents

Publication Publication Date Title
CN110178167B (en) Intersection violation video identification method based on cooperative relay of cameras
CN106652465B (en) Method and system for identifying abnormal driving behaviors on road
Kastrinaki et al. A survey of video processing techniques for traffic applications
Beymer et al. A real-time computer vision system for measuring traffic parameters
Hadi et al. Vehicle detection and tracking techniques: a concise review
CN100502463C (en) Method for collecting characteristics in telecommunication flow information video detection
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN110379168B (en) Traffic vehicle information acquisition method based on Mask R-CNN
CN107301375B (en) Video image smoke detection method based on dense optical flow
Zang et al. Object classification and tracking in video surveillance
CN113763427A (en) Multi-target tracking method based on coarse-fine shielding processing
Ghahremannezhad et al. A new adaptive bidirectional region-of-interest detection method for intelligent traffic video analysis
Kanhere et al. Vehicle segmentation and tracking in the presence of occlusions
CN113516853B (en) Multi-lane traffic flow detection method for complex monitoring scene
Beymer et al. Tracking vehicles in congested traffic
Ren et al. Lane detection in video-based intelligent transportation monitoring via fast extracting and clustering of vehicle motion trajectories
Kanhere et al. Real-time detection and tracking of vehicle base fronts for measuring traffic counts and speeds on highways
SuganyaDevi et al. Efficient foreground extraction based on optical flow and smed for road traffic analysis
CN112215109A (en) Vehicle detection method and system based on scene analysis
Balisavira et al. Real-time object detection by road plane segmentation technique for ADAS
Lee et al. A cumulative distribution function of edge direction for road-lane detection
CN115100650A (en) Expressway abnormal scene denoising and identifying method and device based on multiple Gaussian models
Wang et al. A video traffic flow detection system based on machine vision
Ren et al. High-efficient detection of traffic parameters by using two foreground temporal-spatial images
Cheng et al. Application of convolutional neural network technology in vehicle parking management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination