CN109766846B - Video-based self-adaptive multi-lane traffic flow detection method and system - Google Patents
- Publication number: CN109766846B (application CN201910034729.3A)
- Authority: CN (China)
- Prior art keywords: lane; image; video image; background model; model
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y02T10/10: Internal combustion engine [ICE] based vehicles (under Y02T10/00, road transport of goods or passengers; Y02T, climate change mitigation technologies related to transportation)
- Y02T10/40: Engine management systems
Abstract
The invention discloses a video-based adaptive multi-lane traffic flow detection method and system. The method comprises the following steps: step 1, establishing a lane model and a background model from an acquired lane video image; step 2, identifying vehicles in the lane video image using the established lane model and background model. The invention performs vehicle detection by establishing a background model and, by additionally establishing a lane model, detects vehicles on a per-lane basis.
Description
Technical Field
The invention relates to the field of traffic flow detection, in particular to a video-based self-adaptive multi-lane traffic flow detection method and system.
Background
Traffic flow data acquisition is the foundation of intelligent transportation systems, and video-based acquisition systems are widely used. Such a system takes the video stream of a traffic monitoring camera as input, identifies the vehicles on the road in each frame, counts them, and outputs time-series data. Existing acquisition systems can only measure the overall traffic flow of a road: with multiple lanes, traffic data cannot be collected for each lane separately, although per-lane data is more useful to an intelligent transportation system. In addition, existing systems tolerate changes in road illumination poorly, which degrades the accuracy of the traffic flow statistics.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the above problems, to provide a video-based adaptive multi-lane traffic flow detection method and system that acquire traffic flow data for each lane separately.
The technical scheme adopted by the invention is as follows:
a video-based adaptive multi-lane traffic flow detection method comprises the following steps:
step 1, establishing a lane model and a background model according to an acquired lane video image;
step 2, identifying the vehicle in the lane video image by using the established lane model and the background model.
Further, in step 1, the method for establishing the lane model according to the acquired lane video image specifically includes:
step 1.1.1, filtering a lane video image in an HLS color space according to lane line colors;
step 1.1.2, removing noise from the lane video image filtered in the step 1.1.1 through morphological operation to obtain candidate pixels;
step 1.1.3, performing straight line fitting on candidate pixels by adopting Hough transformation to obtain candidate straight lines;
step 1.1.4, extracting a lane line by adopting a mode of calculating a straight line vanishing point for the candidate straight line;
step 1.1.5, dividing the lane video image into longitudinal areas corresponding to the different lanes by using the extracted lane lines, and establishing the lane model.
Further, in step 1, the method for establishing the background model according to the acquired lane video image specifically includes:
step 1.2.1, acquiring the first T frames of the lane video image;
step 1.2.2, accumulating the first T frames and calculating the per-pixel average avg; accumulating the frame differences of the first T frames and calculating the per-pixel frame-difference average diff;
step 1.2.3, establishing a background model with pixel values in the range of (avg-diff) to (avg+diff).
Further, in step 1, after the background model is established, the background model is updated by evaluating the average brightness, which specifically includes:
step 1.3.1, after a background model is established, calculating and storing the average brightness of the established background model;
step 1.3.2, calculating average brightness of a lane video image acquired subsequently;
step 1.3.3, comparing the average brightness of the current background model with the average brightness of the subsequently acquired lane video image, and updating the background model if the difference exceeds the set average-brightness threshold.
Further, the method for calculating the average brightness specifically comprises the following steps:
step 1.4.1, converting the image into YUV color space, and extracting Y channel gray scale image;
step 1.4.2, calculating the gray-level histogram of the Y-channel gray image and judging whether the proportion of pixels whose brightness exceeds the set light-spot brightness threshold is larger than the set light-spot proportion threshold. If not, the mean of the gray-level histogram is taken as the average brightness. If so, a connected domain is grown in the Y-channel gray image using the maximum brightness value as the seed, the found connected domain is removed from the image, and the mean of the gray-level histogram of the remaining pixels is taken as the average brightness.
Further, in step 2, the method for identifying the vehicle in the lane video image by using the established lane model and the background model specifically includes:
step 2.1, carrying out background difference on a current frame of the lane video image and a background model to obtain a foreground target image;
step 2.2, performing morphological operations on the foreground target image, searching for connected regions, and taking each found connected region as a candidate target;
step 2.3, performing vehicle identification on the candidate targets;
step 2.4, judging the lane in which each identified vehicle is located by using the lane model, and outputting the per-lane vehicle identification result.
Further, in step 2.3, the method for performing vehicle identification on a candidate target specifically includes: setting a virtual coil at a fixed position in the image and, when a candidate target enters the virtual coil, judging whether it is a vehicle according to the morphological characteristics of vehicles and the duration for which the candidate target remains in the virtual coil.
The self-adaptive multi-lane traffic flow detection system is connected with a traffic monitoring camera for acquiring lane video images; the adaptive multi-lane traffic detection system includes:
the lane detection module is used for establishing a lane model according to the acquired lane video image;
the background detection module is used for establishing a background model according to the acquired lane video image;
and the vehicle detection module is used for identifying the vehicle in the lane video image by using the established lane model and the background model.
Further, the adaptive multi-lane traffic detection system further comprises:
and the background updating module is used for updating the background model by evaluating the average brightness after the background model is established by the background detection module.
In summary, due to the adoption of the technical scheme, the beneficial effects of the invention are as follows:
according to the invention, the vehicle detection is carried out by establishing the background model, and meanwhile, the vehicle detection of lane division is realized by establishing the lane model; meanwhile, an average background method is adopted to establish a background model, and the average brightness is evaluated to update the background model, so that the calculated amount is reduced, the robustness of the system is ensured, and the system is resistant to illumination changing along with time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an adaptive multi-lane traffic flow detection method according to the present invention.
FIG. 2 is a flow chart of a method for creating a lane model according to the present invention.
FIG. 3 is a flow chart of a method for establishing a background model according to the present invention.
FIG. 4 is a flowchart of the method for updating background models according to the present invention.
Fig. 5 is a flow chart of a method of the present invention for vehicle identification using a background model and a lane model.
Fig. 6 is a block diagram of an adaptive multi-lane traffic detection system according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
The features and capabilities of the present invention are described in further detail below in connection with the examples.
Example 1
The method for detecting the adaptive multi-lane traffic flow based on the video, as shown in fig. 1, includes:
step 1, establishing a lane model and a background model according to an acquired lane video image;
step 2, identifying the vehicle in the lane video image by using the established lane model and the background model.
As shown in fig. 2, in the step 1, the method for establishing the lane model according to the acquired lane video image specifically includes:
step 1.1.1, filtering a lane video image in an HLS color space according to lane line colors;
the common lane line color adopts white and yellow colors, so that the acquired lane video image is subjected to color filtering in an HLS color space.
First, the color ranges of white and yellow in the HLS color space are set:
h_min1 ≤ H_yellow ≤ h_max1, s_min1 ≤ S_yellow ≤ s_max1, l_min1 ≤ L_yellow ≤ l_max1;
h_min2 ≤ H_white ≤ h_max2, s_min2 ≤ S_white ≤ s_max2, l_min2 ≤ L_white ≤ l_max2.
Then the acquired lane video image is binarized within each of the two color ranges, and the two resulting binary images are combined by an OR operation into a single binary image containing the white and yellow candidate pixels. In a practical implementation of the invention, the yellow color range in the HLS color space may be taken as:
30 ≤ H_yellow ≤ 60, 0.75 ≤ S_yellow ≤ 1.0, 0.5 ≤ L_yellow ≤ 0.7;
and the white color range as:
0 ≤ H_white ≤ 360, 0.0 ≤ S_white ≤ 0.2, 0.95 ≤ L_white ≤ 1.0.
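As a concrete illustration of the thresholding and OR combination above, a NumPy sketch follows. The function name is illustrative, and it assumes the frame has already been converted to a float HLS array with hue in degrees and saturation/lightness normalized to [0, 1], channels ordered (H, L, S):

```python
import numpy as np

def filter_lane_colors(hls):
    """hls: float array (H, W, 3) with channels (hue in degrees, lightness, saturation)."""
    h, l, s = hls[..., 0], hls[..., 1], hls[..., 2]
    # yellow range from the text: 30 <= H <= 60, 0.75 <= S <= 1.0, 0.5 <= L <= 0.7
    yellow = (h >= 30) & (h <= 60) & (s >= 0.75) & (s <= 1.0) & (l >= 0.5) & (l <= 0.7)
    # white range from the text: any hue, 0.0 <= S <= 0.2, 0.95 <= L <= 1.0
    white = (s <= 0.2) & (l >= 0.95)
    # combine the two binary masks with an OR into one candidate-pixel image
    return (yellow | white).astype(np.uint8)
```

The returned 0/1 mask is the binary image that the subsequent morphological cleanup operates on.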
step 1.1.2, removing noise from the filtered lane video image of step 1.1.1 through morphological operations to obtain candidate pixels. Specifically, the binary image of white and yellow candidates obtained in step 1.1.1 is first eroded to remove noise points, and then dilated to reduce the impact of the erosion on the lane lines. This process may be repeated as many times as needed.
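A minimal pure-NumPy sketch of this erode-then-dilate cleanup, assuming a 0/1 binary mask, a 3x3 structuring element, and zero padding at the image borders (function names are illustrative):

```python
import numpy as np

def _shift_stack(img):
    # stack the nine 3x3-neighborhood shifts of the image via padded slicing
    p = np.pad(img, 1)
    return np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)])

def erode(img):
    return _shift_stack(img).min(axis=0)   # a pixel survives only if its whole 3x3 block is set

def dilate(img):
    return _shift_stack(img).max(axis=0)   # a pixel is set if any 3x3 neighbor is set

def denoise(mask, iterations=1):
    # erode to remove isolated noise pixels, then dilate to restore lane-line width
    for _ in range(iterations):
        mask = erode(mask)
    for _ in range(iterations):
        mask = dilate(mask)
    return mask
```

This is the classical morphological opening; production code would typically call an optimized routine instead of slicing by hand.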
Step 1.1.3, performing straight line fitting on candidate pixels by adopting Hough transformation to obtain candidate straight lines; specifically, in the image with the candidate pixels processed in step 1.1.2, the region of interest is divided by setting a virtual line, so that the interference of other roads is eliminated. According to different positions of the cameras on the lane, the included angle between the virtual line and the image edge is 15-20 degrees. And then, performing straight line fitting on the candidate pixels in the region of interest by using Hough transformation to obtain a cluster of candidate straight lines.
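For illustration, the Hough voting step can be sketched compactly in NumPy as below. This is a toy accumulator for small images (a real system would use an optimized library routine); returned lines are (rho, theta) pairs with rho in pixels and theta in radians:

```python
import numpy as np

def hough_lines(binary, n_theta=180, top_k=3):
    """Vote in (rho, theta) space for every foreground pixel; return the top_k peaks."""
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*binary.shape)))
    # rho = x*cos(theta) + y*sin(theta), quantized to whole pixels
    rho = np.rint(xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)).astype(int)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int64)
    np.add.at(acc, (rho.ravel() + diag, np.tile(np.arange(n_theta), len(xs))), 1)
    peaks = np.argsort(acc, axis=None)[::-1][:top_k]
    return [(int(p // n_theta) - diag, float(thetas[p % n_theta])) for p in peaks]
```

Each peak of the accumulator corresponds to one candidate straight line fitted through the candidate pixels.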
Step 1.1.4, extracting the lane lines from the candidate straight lines by computing their vanishing point. Specifically, the vanishing point of the cluster of candidate straight lines obtained in step 1.1.3, i.e. their intersection above the image, is calculated. Since the intersections may be disturbed by other, non-lane-line straight lines, the obtained intersection points are denoted (x_n, y_n) and the intersection nearest the horizontal center of the image is selected, i.e. the one minimizing |x_n - width/2|, where width is the width of the image; the straight lines passing through this selected intersection are taken as the lane lines.
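The vanishing-point selection can be sketched as follows. As a simplifying assumption, each candidate line is represented by a (slope, intercept) pair; near-vertical lines would need the (rho, theta) form instead, and the function name and tolerance are illustrative:

```python
def vanishing_point(lines, width, tol=2.0):
    """lines: candidate straight lines as (slope, intercept) pairs; width: image width."""
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (m1, b1), (m2, b2) = lines[i], lines[j]
            if abs(m1 - m2) < 1e-9:
                continue                      # parallel lines do not intersect
            x = (b2 - b1) / (m1 - m2)         # pairwise intersection
            pts.append((x, m1 * x + b1))
    # select the intersection (x_n, y_n) nearest the horizontal image centre width/2
    vx, vy = min(pts, key=lambda p: abs(p[0] - width / 2))
    # lane lines are the candidates passing (within tol pixels) through that point
    lanes = [(m, b) for (m, b) in lines if abs(m * vx + b - vy) <= tol]
    return (vx, vy), lanes
```

Candidates that miss the chosen vanishing point, such as guardrails or shadows, are discarded.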
Step 1.1.5, dividing the lane video image into longitudinal areas corresponding to the different lanes by using the extracted lane lines, and establishing the lane model.
As shown in fig. 3, in step 1, a method for establishing a background model according to an acquired lane video image specifically includes:
step 1.2.1, acquiring a front T frame image of a lane video image;
step 1.2.2, after accumulating the previous T frame images, calculating a pixel value average avg; after accumulating the frame differences of the previous T frame images, calculating a frame difference average value diff;
step 1.2.3, establishing a background model with pixel values in the range of (avg-diff) to (avg+diff).
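Steps 1.2.1 to 1.2.3 might be sketched in NumPy as follows (the function name and the tuple return convention are illustrative assumptions):

```python
import numpy as np

def build_background_model(frames):
    """frames: array (T, H, W) holding the first T grayscale frames."""
    frames = np.asarray(frames, dtype=np.float64)
    avg = frames.mean(axis=0)                            # average of accumulated pixel values
    diff = np.abs(np.diff(frames, axis=0)).mean(axis=0)  # average of accumulated frame differences
    return avg - diff, avg + diff                        # per-pixel range (avg-diff, avg+diff)
```

A pixel of a new frame that falls inside this per-pixel range is then treated as background; pixels outside it become foreground candidates in step 2.1.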
Further, as shown in fig. 4, in step 1, after the background model is established, the background model is updated by evaluating the average brightness, which specifically includes:
step 1.3.1, after a background model is established, calculating and storing the average brightness of the established background model;
step 1.3.2, calculating average brightness of a lane video image acquired subsequently;
step 1.3.3, comparing the average brightness of the current background model with the average brightness of the subsequently acquired lane video image, and updating the background model if the difference exceeds the set average-brightness threshold. During an update, the background model is regenerated with the same method used in step 1 to establish it from the acquired lane video image.
In actual use, the average brightness of the current image frame may be recalculated only every certain number of frames; this interval controls the update frequency of the background model. Depending on the performance of the device actually used, the average brightness may even be re-evaluated for every frame, and the interval can be set appropriately according to factors such as the actual illumination changes and the hardware performance. Similarly, the average-brightness threshold affects the accuracy of vehicle detection with the background model and may be set according to actual needs.
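A sketch of this update policy follows; the class name and the default interval and threshold values are illustrative assumptions, to be tuned as described above:

```python
class BackgroundUpdater:
    def __init__(self, threshold=15.0, interval=50):
        self.threshold = threshold       # average-brightness change threshold
        self.interval = interval         # re-evaluate brightness every `interval` frames
        self.model_brightness = None     # stored brightness of the background model
        self._count = 0

    def needs_update(self, frame_brightness):
        """Return True when the background model should be rebuilt from recent frames."""
        self._count += 1
        if self._count % self.interval:
            return False                 # skip frames between evaluations
        if self.model_brightness is None:
            self.model_brightness = frame_brightness
            return False
        if abs(frame_brightness - self.model_brightness) > self.threshold:
            self.model_brightness = frame_brightness
            return True
        return False
```

When `needs_update` returns True, the caller rebuilds the background model from the most recent T frames, exactly as in the initial construction.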
The method for calculating the average brightness specifically comprises the following steps:
step 1.4.1, converting the image (the current background model or a subsequent lane video image) into the YUV color space and extracting the Y-channel gray image;
step 1.4.2, calculating the gray-level histogram of the Y-channel gray image and judging whether the proportion of pixels whose brightness exceeds the set light-spot brightness threshold is larger than the set light-spot proportion threshold. If not, the mean of the gray-level histogram is taken as the average brightness. If so, a connected domain is grown in the Y-channel gray image using the maximum brightness value as the seed, the found connected domain is removed from the image, and the mean of the gray-level histogram of the remaining pixels is taken as the average brightness.
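Steps 1.4.1 to 1.4.2 can be sketched as below. The region growing uses a simple 4-connected flood fill from the brightest pixel, and the two threshold values are illustrative assumptions:

```python
import numpy as np
from collections import deque

def average_brightness(y, spot_value=240, spot_ratio=0.05):
    """y: uint8 Y-channel image. Pixels brighter than spot_value are light-spot candidates."""
    ratio = (y > spot_value).mean()
    if ratio <= spot_ratio:
        return y.mean()                       # no significant light spot: plain mean
    # grow a connected region from the brightest pixel (the seed) and exclude it
    seed = np.unravel_index(np.argmax(y), y.shape)
    mask = np.zeros(y.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < y.shape[0] and 0 <= nc < y.shape[1]
                    and not mask[nr, nc] and y[nr, nc] > spot_value):
                mask[nr, nc] = True
                q.append((nr, nc))
    return y[~mask].mean()                    # mean brightness with the spot removed
```

Excluding the spot keeps a single headlight or reflection from inflating the brightness estimate and triggering spurious background updates.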
As shown in fig. 5, in step 2, the method for identifying the vehicle in the lane video image by using the established lane model and the background model specifically includes:
step 2.1, carrying out background difference on a current frame of the lane video image and a background model to obtain a foreground target image;
step 2.2, performing morphological operations on the foreground target image, searching for connected regions, and taking each found connected region as a candidate target. Specifically, after an opening operation and a closing operation are applied to the foreground target image, it is binarized, and regions of identical pixel value are taken as connected regions.
Step 2.3, performing vehicle identification on the candidate targets. Specifically, a virtual coil is set at a fixed position in the image; when a candidate target enters the virtual coil, whether it is a vehicle is judged from the morphological characteristics of vehicles (in the processed image a vehicle appears as a blob, generally close to a rectangle) and from the duration for which the candidate target remains in the virtual coil (measured in image frames, typically 5 to 10 frames).
Step 2.4, judging the lane in which each identified vehicle is located by using the lane model, and outputting the per-lane vehicle identification result.
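The virtual-coil check of steps 2.3 to 2.4 can be sketched as a small stateful class. The class name, the fill-ratio test used as a stand-in for "close to a rectangle", and the default parameters are all illustrative assumptions:

```python
class VirtualCoil:
    """Counts a candidate as a vehicle once it has stayed in the coil region
    for min_frames consecutive frames and its blob roughly fills its bounding box."""

    def __init__(self, coil_box, min_frames=5, min_fill=0.6):
        self.coil = coil_box          # (x0, y0, x1, y1) fixed region of the image
        self.min_frames = min_frames  # 5 to 10 frames per the description
        self.min_fill = min_fill      # blob area / bbox area: rectangle-likeness
        self._streak = 0
        self.count = 0

    def step(self, candidates):
        """candidates: list of ((x0, y0, x1, y1), pixel_area); returns the running count."""
        cx0, cy0, cx1, cy1 = self.coil
        present = False
        for (x0, y0, x1, y1), area in candidates:
            overlaps = not (x1 < cx0 or x0 > cx1 or y1 < cy0 or y0 > cy1)
            boxlike = area / max((x1 - x0) * (y1 - y0), 1) >= self.min_fill
            if overlaps and boxlike:
                present = True
        if present:
            self._streak += 1
            if self._streak == self.min_frames:
                self.count += 1       # target stayed long enough: count one vehicle
        else:
            self._streak = 0
        return self.count
```

One such coil per lane region of the lane model yields the per-lane counts that the system outputs.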
Example 2
Based on the adaptive multi-lane traffic flow detection method provided in embodiment 1, the adaptive multi-lane traffic flow detection system provided in this embodiment, as shown in fig. 6, is connected with a traffic monitoring camera for acquiring lane video images. To ensure vehicle recognition accuracy, the lane video image acquired by the traffic monitoring camera is preferably a color image with a resolution above 640x480 and a frame rate above 20 FPS. The angle between the traffic monitoring camera and the ground is 30-60 degrees, and the angle between the camera and the road direction is no more than 15 degrees.
The adaptive multi-lane traffic detection system includes:
the lane detection module is used for establishing the lane model from the acquired lane video image. Because the traffic monitoring camera is fixed in place, the lane detection module only needs to run once when the system is initialized. Note that if the traffic monitoring camera changes position due to factors such as overhaul, maintenance or replacement, the lane detection module needs to be rerun;
the background detection module is used for establishing a background model according to the acquired lane video image;
and the vehicle detection module is used for identifying the vehicle in the lane video image by using the established lane model and the background model.
Further, the adaptive multi-lane traffic detection system further comprises:
and the background updating module is used for updating the background model by evaluating the average brightness after the background model is established by the background detection module.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the adaptive multi-lane traffic flow detection system and its functional modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (7)
1. The video-based adaptive multi-lane traffic flow detection method is characterized by comprising the following steps of:
step 1, establishing a lane model and a background model according to an acquired lane video image;
step 2, identifying the vehicle in the lane video image by using the established lane model and the background model;
the method for establishing the lane model according to the acquired lane video image comprises the following steps:
step 1.1.1, filtering the lane video image in the HLS color space according to the lane line color:
first, the color ranges of white and yellow in the HLS color space are set:
h_min1 ≤ H_yellow ≤ h_max1, s_min1 ≤ S_yellow ≤ s_max1, l_min1 ≤ L_yellow ≤ l_max1;
h_min2 ≤ H_white ≤ h_max2, s_min2 ≤ S_white ≤ s_max2, l_min2 ≤ L_white ≤ l_max2;
then the acquired lane video image is binarized within each of the two color ranges, and the two resulting binary images are combined by an OR operation into a single binary image containing the white and yellow candidate pixels;
the yellow color range in the HLS color space is taken as:
30 ≤ H_yellow ≤ 60, 0.75 ≤ S_yellow ≤ 1.0, 0.5 ≤ L_yellow ≤ 0.7;
the white color range in the HLS color space is taken as:
0 ≤ H_white ≤ 360, 0.0 ≤ S_white ≤ 0.2, 0.95 ≤ L_white ≤ 1.0;
step 1.1.2, removing noise from the filtered lane video image of step 1.1.1 through morphological operations to obtain candidate pixels: the binary image of white and yellow candidates obtained in step 1.1.1 is first eroded to remove noise points, and then dilated to reduce the impact of the erosion on the lane lines; this process is repeated a number of times;
step 1.1.3, performing line fitting on candidate pixels by adopting Hough transformation to obtain candidate lines: in the image with candidate pixels processed in the step 1.1.2, a region of interest is divided by setting a virtual line, so that interference of other roads is eliminated; according to different positions of the camera on the lane, the included angle between the virtual line and the image edge is 15-20 degrees; then using Hough transformation to perform straight line fitting on candidate pixels in the region of interest to obtain a cluster of candidate straight lines;
step 1.1.4, extracting the lane lines from the candidate straight lines by computing their vanishing point: calculating the vanishing point of the candidate straight lines obtained in step 1.1.3, namely their intersection above the image; since the obtained intersections may be disturbed by other, non-lane-line straight lines, the intersection points are denoted (x_n, y_n) and the intersection nearest the horizontal center of the image is selected, namely the one minimizing |x_n - width/2|, wherein width is the width of the image, and the straight lines passing through the selected intersection are taken as lane lines;
step 1.1.5, dividing a lane video image into longitudinal areas corresponding to different lanes by using the extracted lane lines, and establishing a lane model;
in step 1, after the background model is established, the background model is updated by evaluating the average brightness, which specifically includes:
step 1.3.1, after the background model is established, calculating and storing the average brightness of the established background model, the average brightness of the current image frame being recalculated at a set frame interval;
step 1.3.2, calculating average brightness of a lane video image acquired subsequently;
step 1.3.3, comparing the average brightness of the current background model with the average brightness of the subsequently acquired lane video image, and updating the background model if the difference exceeds the set average-brightness threshold.
2. The adaptive multi-lane traffic flow detection method according to claim 1, wherein in step 1, the method for establishing the background model according to the acquired lane video image specifically comprises:
step 1.2.1, acquiring a front T frame image of a lane video image;
step 1.2.2, after accumulating the previous T frame images, calculating a pixel value average avg; after accumulating the frame differences of the previous T frame images, calculating a frame difference average value diff;
step 1.2.3, establishing a background model with pixel values in the range of (avg-diff) to (avg+diff).
3. The adaptive multi-lane traffic flow detection method according to claim 1, wherein the average brightness calculation method specifically comprises:
step 1.4.1, converting the image into YUV color space, and extracting Y channel gray scale image;
step 1.4.2, calculating the gray-level histogram of the Y-channel gray image and judging whether the proportion of pixels whose brightness exceeds the set light-spot brightness threshold is larger than the set light-spot proportion threshold: if not, the mean of the gray-level histogram is taken as the average brightness; if so, a connected domain is grown in the Y-channel gray image using the maximum brightness value as the seed, the found connected domain is removed from the image, and the mean of the gray-level histogram of the remaining pixels is taken as the average brightness.
4. The adaptive multi-lane traffic detection method according to claim 1, wherein in step 2, the method for identifying the vehicle in the lane video image by using the established lane model and the background model is specifically as follows:
step 2.1, carrying out background difference on a current frame of the lane video image and a background model to obtain a foreground target image;
step 2.2, performing morphological operations on the foreground target image, searching for connected regions, and taking each found connected region as a candidate target;
step 2.3, performing vehicle identification on the candidate targets;
step 2.4, judging the lane in which each identified vehicle is located by using the lane model, and outputting the per-lane vehicle identification result.
5. The adaptive multi-lane traffic flow detection method according to claim 4, wherein the method of step 2.3 for performing vehicle identification on a candidate target is specifically: setting a virtual coil at a fixed position in the image and, when a candidate target enters the virtual coil, judging whether it is a vehicle according to the morphological characteristics of vehicles and the duration for which the candidate target remains in the virtual coil.
6. An adaptive multi-lane traffic detection system, to which a traffic monitoring camera for acquiring a lane video image is connected, wherein the adaptive multi-lane traffic detection system is configured to implement the adaptive multi-lane traffic detection method according to any one of claims 1 to 5, the adaptive multi-lane traffic detection system comprising:
the lane detection module is used for establishing a lane model according to the acquired lane video image;
the background detection module is used for establishing a background model according to the acquired lane video image;
and the vehicle detection module is used for identifying the vehicle in the lane video image by using the established lane model and the background model.
7. The adaptive multi-lane traffic detection system of claim 6 further comprising:
and the background updating module is used for updating the background model by evaluating the average brightness after the background model is established by the background detection module.
Priority Applications (1)
- CN201910034729.3A (CN109766846B), priority date 2019-01-15, filing date 2019-01-15: Video-based self-adaptive multi-lane traffic flow detection method and system
Publications (2)
- CN109766846A, published 2019-05-17
- CN109766846B, granted 2023-07-18
Family
- Family ID: 66453961
- CN201910034729.3A, filed 2019-01-15, granted as CN109766846B (Active)
Families Citing this family (3)
- CN112150828B (priority 2020-09-21, published 2021-08-13, Dalian Maritime University): Method for preventing jitter interference and dynamically regulating traffic lights based on image recognition technology
- CN112562330A (priority 2020-11-27, published 2021-03-26, Shenzhen Integrated Transport Operation Command Center): Method and device for evaluating road operation index, electronic equipment and storage medium
- CN112950662B (priority 2021-03-24, published 2022-04-01, University of Electronic Science and Technology of China): Traffic scene space structure extraction method
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002123820A (en) * | 2000-10-17 | 2002-04-26 | Meidensha Corp | Detecting method and device for obstacle being stationary on road obstacle |
TW201349131A (en) * | 2012-05-31 | 2013-12-01 | Senao Networks Inc | Motion detection device and motion detection method |
CN103886598A (en) * | 2014-03-25 | 2014-06-25 | 北京邮电大学 | Tunnel smoke detecting device and method based on video image processing |
CN107895492A (en) * | 2017-10-24 | 2018-04-10 | 河海大学 | A kind of express highway intelligent analysis method based on conventional video |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2243125B1 (en) * | 2007-12-13 | 2020-04-29 | Clemson University Research Foundation | Vision based real time traffic monitoring |
CN101621615A (en) * | 2009-07-24 | 2010-01-06 | 南京邮电大学 | Self-adaptive background modeling and moving target detecting method |
JP5887067B2 (en) * | 2011-05-20 | 2016-03-16 | 東芝テリー株式会社 | Omnidirectional image processing system |
- 2019-01-15: Application CN201910034729.3A filed in China (CN); granted as CN109766846B (status: Active)
Non-Patent Citations (2)
Title |
---|
Adaptive Multicue Background Subtraction for Robust Vehicle Counting and Classification; Luis Unzueta et al.; IEEE Transactions on Intelligent Transportation Systems; Vol. 13, No. 2; 527-540 * |
Moving Object Detection and Shadow Suppression Based on Background Reconstruction; Liu Chao et al.; Computer Engineering and Applications; Vol. 46, No. 16; 197-199, 209 * |
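The non-patent citations above concern adaptive background subtraction for vehicle detection. As a minimal illustrative sketch of that general technique only (a running-average background model in NumPy; the function names, the blending factor `alpha`, and the threshold are assumptions for illustration, not the method claimed by this patent or by the cited works):

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Blend the current frame into the running-average background model.

    Small alpha adapts the model slowly, so transient vehicles do not
    get absorbed into the background (an assumed, illustrative value).
    """
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=30.0):
    """Mark pixels whose intensity differs from the background by more
    than the threshold as foreground (candidate vehicle pixels)."""
    return np.abs(frame - background) > threshold

# Synthetic 8x8 grayscale scene: uniform road plus a brighter "vehicle" patch.
background = np.full((8, 8), 100.0)
frame = background.copy()
frame[2:4, 2:5] = 200.0  # 2x3 block of vehicle pixels

mask = foreground_mask(background, frame)
print(int(mask.sum()))  # 6 foreground pixels

# Fold the new frame into the model for the next iteration.
background = update_background(background, frame)
```

In a full pipeline, the foreground mask would then be intersected with per-lane regions of a lane model to count vehicles per lane.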
Also Published As
Publication number | Publication date |
---|---|
CN109766846A (en) | 2019-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI607901B (en) | Image inpainting system area and method using the same | |
Wu et al. | Lane-mark extraction for automobiles under complex conditions | |
US11700457B2 (en) | Flicker mitigation via image signal processing | |
CN104778444B (en) | The appearance features analysis method of vehicle image under road scene | |
CN109766846B (en) | Video-based self-adaptive multi-lane traffic flow detection method and system | |
CN109635758B (en) | Intelligent building site video-based safety belt wearing detection method for aerial work personnel | |
CN110450706B (en) | Self-adaptive high beam control system and image processing algorithm | |
CN111860120B (en) | Automatic shielding detection method and device for vehicle-mounted camera | |
CN106991707B (en) | Traffic signal lamp image strengthening method and device based on day and night imaging characteristics | |
Bedruz et al. | Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach | |
US9030559B2 (en) | Constrained parametric curve detection using clustering on Hough curves over a sequence of images | |
CN110060221B (en) | Bridge vehicle detection method based on unmanned aerial vehicle aerial image | |
CN110276318A | Nighttime road rain recognition method, device, computer equipment and storage medium | |
CN107122732B (en) | High-robustness rapid license plate positioning method in monitoring scene | |
KR101026778B1 (en) | Vehicle image detection apparatus | |
CN110427979B (en) | Road water pit identification method based on K-Means clustering algorithm | |
Vajak et al. | A rethinking of real-time computer vision-based lane detection | |
CN107346547A (en) | Real-time foreground extracting method and device based on monocular platform | |
CN111046741A (en) | Method and device for identifying lane line | |
CN112149476A (en) | Target detection method, device, equipment and storage medium | |
CN107977608B (en) | Method for extracting road area of highway video image | |
CN104156727A (en) | Lamplight inverted image detection method based on monocular vision | |
CN113989771A (en) | Traffic signal lamp identification method based on digital image processing | |
Xiaolin et al. | Unstructured road detection based on region growing | |
US11354794B2 (en) | Deposit detection device and deposit detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||