CN109766846A - Video-based adaptive multi-lane traffic flow detection method and system - Google Patents

Video-based adaptive multi-lane traffic flow detection method and system Download PDF

Info

Publication number
CN109766846A
CN109766846A (application CN201910034729.3A)
Authority
CN
China
Prior art keywords
lane
model
video image
image
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910034729.3A
Other languages
Chinese (zh)
Other versions
CN109766846B (en)
Inventor
吴春江
严浩
潘鸿韬
王昱
马泊宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
University of Electronic Science and Technology of China
Original Assignee
SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd and University of Electronic Science and Technology of China
Priority to CN201910034729.3A
Publication of CN109766846A
Application granted
Publication of CN109766846B
Legal status: Active
Anticipated expiration

Links

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a video-based adaptive multi-lane traffic flow detection method and system. The method comprises: step 1, establishing a lane model and a background model from the acquired lane video images; step 2, identifying vehicles in the lane video images using the established lane model and background model. While performing vehicle detection by means of the background model, the invention also realizes per-lane vehicle detection by establishing the lane model.

Description

Video-based adaptive multi-lane traffic flow detection method and system
Technical field
The present invention relates to the field of traffic flow detection, and in particular to a video-based adaptive multi-lane traffic flow detection method and system.
Background art
The acquisition of traffic flow data is the basis of intelligent transportation systems, and video-based acquisition systems are widely used. Such a system takes the video stream of a traffic surveillance camera as input, identifies the vehicles on the road in the picture, and outputs time-series statistics. Existing acquisition systems can only collect statistics for the road as a whole; where there are multiple lanes, they cannot collect the traffic data of each lane separately, although per-lane traffic data are more useful to an intelligent transportation system. In addition, existing systems are not robust to illumination changes in the road environment, which degrades the accuracy of the traffic counts.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the above problems, to provide a video-based adaptive multi-lane traffic flow detection method and system that collects multi-lane traffic data separately for each lane.
The technical solution adopted by the invention is as follows:
A video-based adaptive multi-lane traffic flow detection method, comprising:
Step 1: establishing a lane model and a background model from the acquired lane video images;
Step 2: identifying vehicles in the lane video images using the established lane model and background model.
Further, in step 1, the method for establishing the lane model from the acquired lane video images is specifically:
Step 1.1.1: filtering the lane video image in HLS color space according to the lane line colors;
Step 1.1.2: removing noise from the filtered lane video image of step 1.1.1 by morphological operations to obtain candidate pixels;
Step 1.1.3: performing straight-line fitting on the candidate pixels using the Hough transform to obtain candidate straight lines;
Step 1.1.4: extracting lane lines from the candidate straight lines by computing the vanishing point of the lines;
Step 1.1.5: dividing the lane video image into longitudinal regions corresponding to the different lanes using the extracted lane lines, thereby establishing the lane model.
Further, in step 1, the method for establishing the background model from the acquired lane video images is specifically:
Step 1.2.1: obtaining the first T frames of the lane video;
Step 1.2.2: accumulating the first T frames and computing the average pixel value avg; accumulating the frame differences of the first T frames and computing the average frame difference diff;
Step 1.2.3: establishing the background model from the pixel values in the range (avg-diff) to (avg+diff).
Further, in step 1, after the background model is established, it is updated by evaluating the average brightness, specifically including:
Step 1.3.1: after the background model is established, computing and storing its average brightness;
Step 1.3.2: computing the average brightness of subsequently acquired lane video images;
Step 1.3.3: comparing the average brightness of the current background model with the average brightness of the subsequently acquired lane video image, and updating the background model if the difference exceeds a set average-brightness threshold.
Further, the method for computing the average brightness is specifically:
Step 1.4.1: converting the image to YUV color space and extracting the Y-channel grayscale image;
Step 1.4.2: computing the gray-level histogram of the Y-channel grayscale image and judging whether the proportion of pixels whose brightness exceeds a set highlight-brightness threshold is greater than a set highlight-ratio threshold: if not, taking the mean of the histogram as the average brightness; if so, using the maximum brightness value as a seed to find connected regions in the Y-channel grayscale image, removing the found connected regions from the image, and then taking the mean of the gray-level histogram of the remaining image as the average brightness.
Further, in step 2, the method for identifying vehicles in the lane video images using the established lane model and background model is specifically:
Step 2.1: performing background subtraction between the current frame of the lane video and the background model to obtain a foreground target image;
Step 2.2: applying morphological operations to the foreground target image, then searching for connected regions and taking the found connected regions as candidate targets;
Step 2.3: performing vehicle identification on the candidate targets;
Step 2.4: judging the lane of each recognized vehicle using the lane model, and outputting the vehicle identification results per lane.
Further, in step 2.3, the method for performing vehicle identification on the candidate targets is specifically: setting a virtual coil at a given position in the image, and, if a candidate target enters the virtual coil, judging whether it is a vehicle according to the morphological features of vehicles and the duration for which the candidate target stays in the virtual coil.
An adaptive multi-lane traffic flow detection system, connected to a traffic surveillance camera that acquires lane video images; the adaptive multi-lane traffic flow detection system includes:
a lane detection module for establishing a lane model from the acquired lane video images;
a background detection module for establishing a background model from the acquired lane video images;
a vehicle detection module for identifying vehicles in the lane video images using the established lane model and background model.
Further, the adaptive multi-lane traffic flow detection system also includes:
a background update module for updating the background model by evaluating the average brightness after the background detection module has established the background model.
In conclusion by adopting the above-described technical solution, the beneficial effects of the present invention are:
While the present invention carries out vehicle detection by establishing background model, by establishing lane model realization divided lane Vehicle detection;Meanwhile background model is established using average background method, and background model is updated by assessment average brightness, While reducing calculation amount, guarantees system robustness, the illumination changed over time can be resisted.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be construed as limiting its scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of the adaptive multi-lane traffic flow detection method of the invention.
Fig. 2 is a flowchart of the method for establishing the lane model of the invention.
Fig. 3 is a flowchart of the method for establishing the background model of the invention.
Fig. 4 is a flowchart of the decision to update the background model of the invention.
Fig. 5 is a flowchart of the method of the invention for identifying vehicles using the background model and the lane model.
Fig. 6 is a structural block diagram of the adaptive multi-lane traffic flow detection system of the invention.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it; that is, the described embodiments are only a part of the embodiments of the present invention, not all of them. The components of the embodiments of the present invention, as generally described and illustrated in the drawings, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The features and performance of the present invention are described in further detail below with reference to the embodiments.
Embodiment 1
This embodiment provides a video-based adaptive multi-lane traffic flow detection method, as shown in Fig. 1, comprising:
Step 1: establishing a lane model and a background model from the acquired lane video images;
Step 2: identifying vehicles in the lane video images using the established lane model and background model.
As shown in Fig. 2, in step 1 the method for establishing the lane model from the acquired lane video images is specifically:
Step 1.1.1: the lane video image is filtered in HLS color space according to the lane line colors.
Common lane lines are painted in white or yellow; accordingly, the acquired lane video image is color-filtered in HLS color space.
First, the color ranges of yellow and white in HLS color space are set:
h_min1 ≤ H_yellow ≤ h_max1, s_min1 ≤ S_yellow ≤ s_max1, l_min1 ≤ L_yellow ≤ l_max1
h_min2 ≤ H_white ≤ h_max2, s_min2 ≤ S_white ≤ s_max2, l_min2 ≤ L_white ≤ l_max2
Then, within the respective color ranges of yellow and white, the acquired lane video image is binarized, and the two resulting binary images are merged by an OR operation into a single binary image filtered for white and yellow. In actual implementation,
the color range of yellow in HLS color space may be taken as:
30 ≤ H_yellow ≤ 60, 0.75 ≤ S_yellow ≤ 1.0, 0.5 ≤ L_yellow ≤ 0.7;
the color range of white in HLS color space may be taken as:
0 ≤ H_white ≤ 360, 0.0 ≤ S_white ≤ 0.2, 0.95 ≤ L_white ≤ 1.0.
Step 1.1.2: noise is removed from the filtered lane video image of step 1.1.1 by morphological operations to obtain candidate pixels. Specifically, the filtered white/yellow binary image obtained in step 1.1.1 is first eroded to remove noise and then dilated to reduce the effect of the erosion on the lane lines; this process can be repeated as many times as needed.
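As an illustration of steps 1.1.1 and 1.1.2, a minimal Python/OpenCV sketch is given below (the patent does not name an implementation; the thresholds are the ranges above rescaled to OpenCV's 8-bit HLS convention, H in 0-180 and L, S in 0-255, and the function name is illustrative):

```python
import cv2
import numpy as np

def lane_color_mask(bgr_frame):
    """Steps 1.1.1-1.1.2: HLS color filtering plus morphological denoising."""
    hls = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HLS)       # channel order: H, L, S

    # Yellow markings: 30<=H<=60 degrees -> 15..30, L in [0.5, 0.7] -> 128..179,
    # S in [0.75, 1.0] -> 191..255 under OpenCV's 8-bit scaling.
    yellow = cv2.inRange(hls, (15, 128, 191), (30, 179, 255))
    # White markings: any hue, very high lightness, low saturation.
    white = cv2.inRange(hls, (0, 242, 0), (180, 255, 51))

    mask = cv2.bitwise_or(yellow, white)                    # merge the two binary images

    # Erode to remove speckle noise, then dilate to limit the effect on the
    # lane lines; repeat the pair of operations if necessary.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=1)
    return mask
```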
Step 1.1.3: straight-line fitting is performed on the candidate pixels using the Hough transform to obtain candidate straight lines. Specifically, in the candidate-pixel image produced by step 1.1.2, a region of interest is delimited by virtual lines to exclude interference from other roads; depending on the position of the camera over the lanes, the angle between the virtual lines and the image border is between 15 and 20 degrees. The Hough transform is then applied to the candidate pixels within the region of interest, yielding a cluster of candidate straight lines.
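A possible sketch of step 1.1.3 follows; the region-of-interest polygon and the Hough parameters are assumptions, since the patent only specifies the 15-20 degree angle between the virtual lines and the image border:

```python
import cv2
import numpy as np

def fit_candidate_lines(mask, border_angle_deg=18):
    """Step 1.1.3: restrict to a region of interest, then Hough line fitting."""
    h, w = mask.shape[:2]

    # Region of interest bounded by two virtual lines tilted by
    # border_angle_deg from the vertical image borders (assumed geometry).
    shift = int(np.tan(np.radians(border_angle_deg)) * h)
    roi = np.zeros_like(mask)
    polygon = np.array([(shift, 0), (w - shift, 0), (w - 1, h - 1), (0, h - 1)],
                       dtype=np.int32)
    cv2.fillPoly(roi, [polygon], 255)
    masked = cv2.bitwise_and(mask, roi)

    # Probabilistic Hough transform on the remaining candidate pixels;
    # the numeric parameters are illustrative only.
    lines = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]   # (x1, y1, x2, y2)
```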
Step 1.1.4: lane lines are extracted from the candidate straight lines by computing their vanishing point. Specifically, the vanishing point of the cluster of candidate straight lines obtained in step 1.1.3, i.e. their intersection above the image, is computed. Because straight lines that are not lane lines may interfere, several intersection points (x_n, y_n) may be obtained; the intersection closest to the image center is chosen, i.e. the point that minimizes |x_n - width/2|, where width is the width of the image, and the straight lines passing through this selected intersection are taken as the lane lines.
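The vanishing-point selection of step 1.1.4 can be sketched as below (an assumed implementation; the pairwise-intersection search and the distance tolerance are illustrative choices):

```python
import numpy as np

def select_lane_lines(lines, image_width, tol=5.0):
    """Step 1.1.4: keep the lines through the vanishing point nearest the image centre."""
    def intersect(l1, l2):
        x1, y1, x2, y2 = l1
        x3, y3, x4, y4 = l2
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(d) < 1e-6:
            return None                                      # parallel lines
        px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
        py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
        return px, py

    points = [p for i in range(len(lines)) for j in range(i + 1, len(lines))
              if (p := intersect(lines[i], lines[j])) is not None]
    if not points:
        return []

    # Intersection whose x coordinate is closest to width/2.
    vx, vy = min(points, key=lambda p: abs(p[0] - image_width / 2))

    def passes_through(l):
        x1, y1, x2, y2 = l
        num = abs((y2 - y1) * vx - (x2 - x1) * vy + x2 * y1 - y2 * x1)
        den = np.hypot(y2 - y1, x2 - x1)
        return den > 1e-6 and num / den < tol

    return [l for l in lines if passes_through(l)]
```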
Step 1.1.5: the lane video image is divided into longitudinal regions corresponding to the different lanes using the extracted lane lines, thereby establishing the lane model.
As shown in Fig. 3, in step 1 the method for establishing the background model from the acquired lane video images is specifically:
Step 1.2.1: the first T frames of the lane video are obtained;
Step 1.2.2: the first T frames are accumulated and the average pixel value avg is computed; the frame differences of the first T frames are accumulated and the average frame difference diff is computed;
Step 1.2.3: the background model is established from the pixel values in the range (avg-diff) to (avg+diff).
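A minimal sketch of steps 1.2.1-1.2.3, assuming grayscale processing and an OpenCV VideoCapture source (T=100 is an illustrative value):

```python
import cv2
import numpy as np

def build_background_model(capture, T=100):
    """Steps 1.2.1-1.2.3: per-pixel background band [avg - diff, avg + diff]."""
    frames = []
    for _ in range(T):
        ok, frame = capture.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))

    stack = np.stack(frames)                                 # shape (T, H, W)
    avg = stack.mean(axis=0)                                 # average pixel value
    diff = np.abs(np.diff(stack, axis=0)).mean(axis=0)       # average frame difference
    return avg - diff, avg + diff
```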
Further, as shown in Fig. 4, in step 1, after the background model is established, it is updated by evaluating the average brightness, specifically including:
Step 1.3.1: after the background model is established, its average brightness is computed and stored;
Step 1.3.2: the average brightness of subsequently acquired lane video images is computed;
Step 1.3.3: the average brightness of the current background model is compared with the average brightness of the subsequently acquired lane video image, and the background model is updated if the difference exceeds a set average-brightness threshold. When updating, the new background model is generated in the same way as the background model established from the acquired lane video images in step 1 above.
In actual use, the average brightness of the current image frame can be computed every certain number of frames; this frame interval determines the update frequency of the background model. Depending on the performance of the device actually used, the average brightness may be re-evaluated for every frame, or the interval may be set according to factors such as the actual illumination variation and the hardware performance. Similarly, the average-brightness threshold affects the detection accuracy of vehicle detection based on the background model and can also be configured according to actual needs.
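The update decision of steps 1.3.1-1.3.3 could look like the following sketch; the frame interval and brightness threshold are configuration assumptions, and `average_brightness` refers to the sketch given after step 1.4.2 below:

```python
def should_update_background(model_brightness, frame, frame_index,
                             interval=50, threshold=15.0):
    """Steps 1.3.1-1.3.3: re-check brightness every `interval` frames and
    signal a rebuild of the background model when it drifts too far."""
    if frame_index % interval != 0:
        return False
    return abs(average_brightness(frame) - model_brightness) > threshold
```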
The method for computing the average brightness is specifically:
Step 1.4.1: the image (the current background model or a subsequent lane video image) is converted to YUV color space, and the Y-channel grayscale image is extracted;
Step 1.4.2: the gray-level histogram of the Y-channel grayscale image is computed, and it is judged whether the proportion of pixels whose brightness exceeds a set highlight-brightness threshold is greater than a set highlight-ratio threshold: if not, the mean of the histogram is taken as the average brightness; if so, the maximum brightness value is used as a seed to find connected regions in the Y-channel grayscale image, the found connected regions are removed from the image, and the mean of the gray-level histogram of the remaining image is taken as the average brightness.
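A sketch of steps 1.4.1-1.4.2 is shown below for a BGR frame (applying it to the stored background image is analogous). The highlight thresholds are illustrative, and thresholding at the highlight value is a simplification of the seeded connected-domain search described above:

```python
import cv2
import numpy as np

def average_brightness(bgr_frame, spot_value=250, spot_ratio=0.05):
    """Steps 1.4.1-1.4.2: Y-channel mean with bright-spot rejection."""
    y = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YUV)[:, :, 0]   # Y-channel grey image

    hist = cv2.calcHist([y], [0], None, [256], [0, 256]).ravel()
    if hist[spot_value:].sum() / y.size <= spot_ratio:
        return float(y.mean())                                # few highlights: plain mean

    # Drop the bright connected regions (e.g. headlight or lamp spots)
    # before averaging the remaining pixels.
    _, bright = cv2.threshold(y, spot_value - 1, 255, cv2.THRESH_BINARY)
    _, labels = cv2.connectedComponents(bright)
    keep = labels == 0
    return float(y[keep].mean()) if keep.any() else float(y.mean())
```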
As shown in Fig. 5, in step 2 the method for identifying vehicles in the lane video images using the established lane model and background model is specifically:
Step 2.1: background subtraction is performed between the current frame of the lane video and the background model to obtain a foreground target image;
Step 2.2: morphological operations are applied to the foreground target image, connected regions are then searched for, and the found connected regions are taken as candidate targets. Specifically, an opening and a closing operation are applied to the foreground target image, the result is binarized, and regions of the same pixel value are taken as connected regions.
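Steps 2.1-2.2 could be sketched as follows, with `low` and `high` being the per-pixel bounds from the background-model sketch above and `min_area` an assumed size filter:

```python
import cv2
import numpy as np

def candidate_targets(bgr_frame, low, high, min_area=400):
    """Steps 2.1-2.2: background difference, opening/closing, connected regions."""
    grey = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Background difference: pixels outside the [low, high] band are foreground.
    fg = ((grey < low) | (grey > high)).astype(np.uint8) * 255

    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)

    # Connected regions of the binarized foreground are the candidate targets.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
    boxes = []
    for i in range(1, n):                                     # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h, tuple(centroids[i])))
    return boxes
```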
Step 2.3: vehicle identification is performed on the candidate targets. Specifically, a virtual coil is set at a given position in the image; if a candidate target enters the virtual coil, it is judged to be a vehicle or not according to the morphological features of vehicles (in the processed image a vehicle appears as a blob, roughly rectangular) and the duration for which the candidate target stays in the virtual coil (which can be measured in image frames, typically 5 to 10 frames).
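The virtual-coil judgment of step 2.3 could be sketched as below; the 5-frame dwell time and the blob morphology test follow the description, while the tracking of candidates by an external identifier is an assumption:

```python
class VirtualCoil:
    """Step 2.3: count a candidate as a vehicle once it has stayed inside the
    coil long enough and its blob looks roughly rectangular."""

    def __init__(self, x, y, w, h, min_frames=5):
        self.rect = (x, y, w, h)
        self.min_frames = min_frames
        self.dwell = {}                          # candidate id -> frames inside coil

    def _inside(self, cx, cy):
        x, y, w, h = self.rect
        return x <= cx <= x + w and y <= cy <= y + h

    def _vehicle_shaped(self, box):
        _, _, bw, bh, _ = box
        return 0.5 <= bw / float(bh) <= 3.0      # blob roughly rectangular

    def update(self, tracked_boxes):
        """tracked_boxes: {candidate_id: (x, y, w, h, (cx, cy))}.
        Returns the ids newly counted as vehicles in this frame."""
        vehicles = []
        for cid, box in tracked_boxes.items():
            cx, cy = box[4]
            if self._inside(cx, cy) and self._vehicle_shaped(box):
                self.dwell[cid] = self.dwell.get(cid, 0) + 1
                if self.dwell[cid] == self.min_frames:
                    vehicles.append(cid)         # counted once, on the Nth frame inside
            else:
                self.dwell.pop(cid, None)
        return vehicles
```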
Step 2.4: the lane of each recognized vehicle is judged using the lane model, and the vehicle identification results are output per lane.
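Under the lane model above, step 2.4 reduces to locating a vehicle's centroid between two adjacent lane lines; a minimal sketch follows (representing the lane boundaries as x positions along the coil row is an assumption):

```python
def assign_lane(lane_boundaries_x, centroid_x):
    """Step 2.4: index of the lane whose longitudinal region contains the centroid."""
    bounds = sorted(lane_boundaries_x)           # x of each lane line at the coil row
    for lane_index in range(len(bounds) - 1):
        if bounds[lane_index] <= centroid_x < bounds[lane_index + 1]:
            return lane_index
    return None                                  # outside the modelled lanes

# Example: with boundaries [80, 300, 520, 740], a centroid at x = 350 lies in lane 1.
```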
Embodiment 2
Based on the adaptive multi-lane traffic flow detection method provided by Embodiment 1, this embodiment provides an adaptive multi-lane traffic flow detection system, as shown in Fig. 6, connected to a traffic surveillance camera that acquires lane video images. To guarantee vehicle identification accuracy, the lane video images acquired by the traffic surveillance camera are preferably color images, with a resolution preferably higher than 640x480 and a frame rate preferably above 20 FPS. The angle between the traffic surveillance camera and the ground is 30 to 60 degrees, and its angle to the road direction does not exceed 15 degrees.
The adaptive multi-lane traffic flow detection system includes:
a lane detection module for establishing a lane model from the acquired lane video images; since the traffic surveillance camera is fixed in place, the lane detection module only needs to run once, at system initialization. Note that if the camera position shifts because of factors such as maintenance, repair or replacement, the lane detection module must be rerun;
a background detection module for establishing a background model from the acquired lane video images;
a vehicle detection module for identifying vehicles in the lane video images using the established lane model and background model.
Further, the adaptive multi-lane traffic flow detection system also includes:
a background update module for updating the background model by evaluating the average brightness after the background detection module has established the background model.
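For illustration, the four modules of Embodiment 2 could be wired around the function sketches of Embodiment 1 roughly as follows; every class and method name here is an assumption, not the patent's code:

```python
class AdaptiveMultiLaneMonitor:
    """Illustrative wiring of the lane detection, background detection,
    background update and vehicle detection modules."""

    def __init__(self, capture):
        self.capture = capture
        self.lane_lines = None                    # lane detection module output
        self.low = self.high = None               # background detection module output
        self.model_brightness = None

    def initialise(self):
        # Lane detection module: runs once at system initialisation
        # (rerun only if the camera has been moved).
        ok, frame = self.capture.read()
        if not ok:
            raise RuntimeError("no video frame available")
        mask = lane_color_mask(frame)
        self.lane_lines = select_lane_lines(fit_candidate_lines(mask), frame.shape[1])
        # Background detection module.
        self.low, self.high = build_background_model(self.capture)
        self.model_brightness = average_brightness(frame)

    def process(self, frame, frame_index):
        # Background update module.
        if should_update_background(self.model_brightness, frame, frame_index):
            self.low, self.high = build_background_model(self.capture)
            self.model_brightness = average_brightness(frame)
        # Vehicle detection module: candidate extraction; the virtual-coil and
        # per-lane counting steps of Embodiment 1 would follow.
        return candidate_targets(frame, self.low, self.high)
```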
It is apparent to those skilled in the art that, for convenience and brevity of description, for the specific working process of the adaptive multi-lane traffic flow detection system described above and of its functional modules, reference may be made to the corresponding process in the foregoing method embodiment, and details are not repeated here.
The foregoing is merely a description of the preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. A video-based adaptive multi-lane traffic flow detection method, characterized by comprising:
Step 1: establishing a lane model and a background model from the acquired lane video images;
Step 2: identifying vehicles in the lane video images using the established lane model and background model.
2. The adaptive multi-lane traffic flow detection method according to claim 1, characterized in that, in step 1, the method for establishing the lane model from the acquired lane video images is specifically:
Step 1.1.1: filtering the lane video image in HLS color space according to the lane line colors;
Step 1.1.2: removing noise from the filtered lane video image of step 1.1.1 by morphological operations to obtain candidate pixels;
Step 1.1.3: performing straight-line fitting on the candidate pixels using the Hough transform to obtain candidate straight lines;
Step 1.1.4: extracting lane lines from the candidate straight lines by computing the vanishing point of the lines;
Step 1.1.5: dividing the lane video image into longitudinal regions corresponding to the different lanes using the extracted lane lines, thereby establishing the lane model.
3. The adaptive multi-lane traffic flow detection method according to claim 1, characterized in that, in step 1, the method for establishing the background model from the acquired lane video images is specifically:
Step 1.2.1: obtaining the first T frames of the lane video;
Step 1.2.2: accumulating the first T frames and computing the average pixel value avg; accumulating the frame differences of the first T frames and computing the average frame difference diff;
Step 1.2.3: establishing the background model from the pixel values in the range (avg-diff) to (avg+diff).
4. The adaptive multi-lane traffic flow detection method according to claim 1, characterized in that, in step 1, after the background model is established, it is updated by evaluating the average brightness, specifically including:
Step 1.3.1: after the background model is established, computing and storing its average brightness;
Step 1.3.2: computing the average brightness of subsequently acquired lane video images;
Step 1.3.3: comparing the average brightness of the current background model with the average brightness of the subsequently acquired lane video image, and updating the background model if the difference exceeds a set average-brightness threshold.
5. The adaptive multi-lane traffic flow detection method according to claim 4, characterized in that the method for computing the average brightness is specifically:
Step 1.4.1: converting the image to YUV color space and extracting the Y-channel grayscale image;
Step 1.4.2: computing the gray-level histogram of the Y-channel grayscale image and judging whether the proportion of pixels whose brightness exceeds a set highlight-brightness threshold is greater than a set highlight-ratio threshold: if not, taking the mean of the histogram as the average brightness; if so, using the maximum brightness value as a seed to find connected regions in the Y-channel grayscale image, removing the found connected regions from the image, and then taking the mean of the gray-level histogram of the remaining image as the average brightness.
6. The adaptive multi-lane traffic flow detection method according to claim 1, characterized in that, in step 2, the method for identifying vehicles in the lane video images using the established lane model and background model is specifically:
Step 2.1: performing background subtraction between the current frame of the lane video and the background model to obtain a foreground target image;
Step 2.2: applying morphological operations to the foreground target image, then searching for connected regions and taking the found connected regions as candidate targets;
Step 2.3: performing vehicle identification on the candidate targets;
Step 2.4: judging the lane of each recognized vehicle using the lane model, and outputting the vehicle identification results per lane.
7. The adaptive multi-lane traffic flow detection method according to claim 6, characterized in that, in step 2.3, the method for performing vehicle identification on the candidate targets is specifically: setting a virtual coil at a given position in the image, and, if a candidate target enters the virtual coil, judging whether it is a vehicle according to the morphological features of vehicles and the duration for which the candidate target stays in the virtual coil.
8. An adaptive multi-lane traffic flow detection system, connected to a traffic surveillance camera that acquires lane video images, characterized in that the adaptive multi-lane traffic flow detection system includes:
a lane detection module for establishing a lane model from the acquired lane video images;
a background detection module for establishing a background model from the acquired lane video images;
a vehicle detection module for identifying vehicles in the lane video images using the established lane model and background model.
9. The adaptive multi-lane traffic flow detection system according to claim 8, characterized by further including:
a background update module for updating the background model by evaluating the average brightness after the background detection module has established the background model.
CN201910034729.3A 2019-01-15 2019-01-15 Video-based self-adaptive multi-lane traffic flow detection method and system Active CN109766846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910034729.3A CN109766846B (en) 2019-01-15 2019-01-15 Video-based self-adaptive multi-lane traffic flow detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910034729.3A CN109766846B (en) 2019-01-15 2019-01-15 Video-based self-adaptive multi-lane traffic flow detection method and system

Publications (2)

Publication Number Publication Date
CN109766846A true CN109766846A (en) 2019-05-17
CN109766846B CN109766846B (en) 2023-07-18

Family

ID=66453961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910034729.3A Active CN109766846B (en) 2019-01-15 2019-01-15 Video-based self-adaptive multi-lane traffic flow detection method and system

Country Status (1)

Country Link
CN (1) CN109766846B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150828A (en) * 2020-09-21 2020-12-29 大连海事大学 Method for preventing jitter interference and dynamically regulating traffic lights based on image recognition technology
CN112562330A (en) * 2020-11-27 2021-03-26 深圳市综合交通运行指挥中心 Method and device for evaluating road operation index, electronic equipment and storage medium
CN112950662A (en) * 2021-03-24 2021-06-11 电子科技大学 Traffic scene space structure extraction method


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002123820A (en) * 2000-10-17 2002-04-26 Meidensha Corp Detecting method and device for obstacle being stationary on road obstacle
WO2009076182A1 (en) * 2007-12-13 2009-06-18 Clemson University Vision based real time traffic monitoring
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method
JP2012244479A (en) * 2011-05-20 2012-12-10 Toshiba Teli Corp All-round monitored image processing system
TW201349131A (en) * 2012-05-31 2013-12-01 Senao Networks Inc Motion detection device and motion detection method
CN103886598A (en) * 2014-03-25 2014-06-25 北京邮电大学 Tunnel smoke detecting device and method based on video image processing
CN107895492A (en) * 2017-10-24 2018-04-10 河海大学 A kind of express highway intelligent analysis method based on conventional video

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
LUIS UNZUETA 等: "Adaptive Multicue Background Subtraction for Robust Vehicle Counting and Classification", 《IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS》, vol. 13, no. 2, pages 527 - 540, XP011445680, DOI: 10.1109/TITS.2011.2174358 *
付永春: "单目视觉结构化道路车道线检测和跟踪技术研究", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》 *
付永春: "单目视觉结构化道路车道线检测和跟踪技术研究", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》, no. 7, 15 July 2012 (2012-07-15), pages 138 - 2212 *
刘超 等: "基于背景重建的运动目标检测与阴影抑制", 《计算机工程与应用》, vol. 46, no. 16, pages 197 - 199 *
戴晶华 等: "多车道视频车流量检测和计数", 《国外电子测量技术》 *
戴晶华 等: "多车道视频车流量检测和计数", 《国外电子测量技术》, vol. 35, no. 10, 31 October 2016 (2016-10-31), pages 30 - 33 *
王妍: "智能交通系统中车流量检测技术研究", 《中国优秀博硕士学位论文全文数据库(硕士) 工程科技Ⅱ辑》 *
王妍: "智能交通系统中车流量检测技术研究", 《中国优秀博硕士学位论文全文数据库(硕士) 工程科技Ⅱ辑》, no. 5, 15 May 2015 (2015-05-15), pages 034 - 258 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150828A (en) * 2020-09-21 2020-12-29 大连海事大学 Method for preventing jitter interference and dynamically regulating traffic lights based on image recognition technology
CN112150828B (en) * 2020-09-21 2021-08-13 大连海事大学 Method for preventing jitter interference and dynamically regulating traffic lights based on image recognition technology
CN112562330A (en) * 2020-11-27 2021-03-26 深圳市综合交通运行指挥中心 Method and device for evaluating road operation index, electronic equipment and storage medium
CN112950662A (en) * 2021-03-24 2021-06-11 电子科技大学 Traffic scene space structure extraction method
CN112950662B (en) * 2021-03-24 2022-04-01 电子科技大学 Traffic scene space structure extraction method

Also Published As

Publication number Publication date
CN109766846B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
US8184859B2 (en) Road marking recognition apparatus and method
US8280106B2 (en) Shadow and highlight detection system and method of the same in surveillance camera and recording medium thereof
CN103763515B (en) A kind of video abnormality detection method based on machine learning
CN109766846A (en) A kind of adaptive multilane vehicle flux monitor method and system based on video
CN103679733B (en) A kind of signal lamp image processing method and its device
CN103235938A (en) Method and system for detecting and identifying license plate
CN106991707B (en) Traffic signal lamp image strengthening method and device based on day and night imaging characteristics
EP3036714B1 (en) Unstructured road boundary detection
CN104700430A (en) Method for detecting movement of airborne displays
CN105117726B (en) License plate locating method based on multiple features zone-accumulation
KR101204259B1 (en) A method for detecting fire or smoke
CN104021527B (en) Rain and snow removal method in image
CN107122732B (en) High-robustness rapid license plate positioning method in monitoring scene
CN103106796A (en) Vehicle detection method and device of intelligent traffic surveillance and control system
CN104778723A (en) Method for performing motion detection on infrared image with three-frame difference method
KR101026778B1 (en) Vehicle image detection apparatus
CN103021179A (en) Real-time monitoring video based safety belt detection method
CN103729828A (en) Video rain removing method
KR100965800B1 (en) method for vehicle image detection and speed calculation
CN102724541B (en) Intelligent diagnosis and recovery method for monitoring images
CN104463812B (en) The method for repairing the video image by raindrop interference when shooting
CN110688979A (en) Illegal vehicle tracking method and device
CN111382736B (en) License plate image acquisition method and device
CN111145219B (en) Efficient video moving target detection method based on Codebook principle
CN108961357A (en) A kind of excessively quick-fried image intensification method and device of traffic lights

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant