CN111915883A - Road traffic condition detection method based on vehicle-mounted camera shooting - Google Patents


Info

Publication number
CN111915883A
Authority
CN
China
Prior art keywords: vehicle, lane, image, vehicles, traffic
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010556147.4A
Other languages
Chinese (zh)
Inventor
蔺杰 (Lin Jie)
黄勇 (Huang Yong)
赵鹏 (Zhao Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Xi'an Jiaotong University
Priority to CN202010556147.4A
Publication of CN111915883A
Legal status: Pending

Classifications

    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions (traffic control systems for road vehicles)
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 20/588 Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G08G 1/0125 Traffic data processing
    • G08G 1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G06T 2207/10004 Still image; photographic image
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30256 Lane; road marking

Abstract

The invention discloses a road traffic condition detection method based on vehicle-mounted camera shooting, which uses a deep-learning-based vehicle detection approach and fully considers the dynamic and uncertain nature of road traffic. The traffic condition estimate helps vehicles dynamically describe the current traffic state, improving traffic efficiency in intelligent transportation. Aimed at intelligent traffic systems, the method lets a vehicle estimate traffic conditions from the traffic information collected by its on-board camera equipment. By integrating with edge computing and V2X networks, additional computing and communication resources become available to estimate accurately the traffic conditions the vehicle is approaching. The method counts the number of vehicles travelling in the different lanes and finally produces a fine-grained estimate of the road traffic condition.

Description

Road traffic condition detection method based on vehicle-mounted camera shooting
Technical Field
The invention belongs to the field of deep learning and intelligent traffic, and particularly relates to a road traffic condition detection method based on vehicle-mounted camera shooting.
Background
Intelligent transportation is developing rapidly, and the technologies applied in the automotive field are changing quickly. Advanced driver-assistance systems mainly help drivers avoid serious traffic accidents such as high- and low-speed rear-end collisions, unintended lane departure at high speed, and collisions with pedestrians, and researchers are investing substantial resources in this area. Much work has been done on vehicle detection and tracking, most of which assumes a static camera. However, accidents on expressways, rural roads, and roads far from urban areas cannot be effectively monitored, and traffic there cannot be directed, because the surveillance cameras on such roads are few and mostly fixed, so drivers cannot obtain the traffic conditions of the road ahead in time. At present drivers do not share vehicle information with the cloud; in an Internet-of-Vehicles scenario, vehicles can communicate directly with one another to exchange vehicle information and learn the current traffic conditions, so that a driver, or a vehicle with automatic driving capability, can choose more suitable driving behaviour and routes. Such a system is a large network in which electronic components integrated in the car (GPS positioning, RFID identification, sensors, cameras, and image processing) carry out wireless communication and information exchange among V2V, V2R, and V2I according to agreed communication protocols and data-interaction standards.
Fused with modern communication and network technologies, it realizes the exchange and sharing of intelligent information among vehicles, people, roads, and the background, and provides functions such as complex environment perception, intelligent decision-making, cooperative control, and execution. With the continuous improvement of living standards and incomes, cars have become a daily necessity and more and more vehicles travel on the roads, so that not only is urban traffic increasingly congested, but road traffic far from urban areas is also under strain. Yet on expressways, rural roads, and other road sections with few fixed cameras, effective detection and vehicle statistics cannot be performed.
Disclosure of Invention
The invention provides a road traffic condition detection method based on vehicle-mounted camera shooting, which treats the vehicle as an edge computing node, senses and records the surrounding environment through a vehicle-mounted camera, accurately estimates the current traffic condition, and feeds the estimate back to the driver. A model is trained using a vehicle data set. The deep-learning-based object detection algorithm YOLOv3 detects vehicle information in the recorded video stream: frames are extracted from the stream, vehicles are detected and identified frame by frame, and bounding boxes are drawn for the predicted vehicles.
For the road traffic assessment on each lane: since the vehicle-mounted camera is fixed relative to the vehicle, and the lateral position of the vehicle relative to the lane is also essentially fixed, the lane stays within a fixed area of the video recorded by the vehicle-mounted camera. A region of interest (ROI) is therefore set on each image extracted from the video; image-processing techniques such as grayscale conversion, Gaussian filtering, and Canny edge detection are applied; lines in the image are then found with the Hough transform; the left and right lane lines are computed from the detected lines; and each recognized vehicle is assigned to the left, middle, or right lane.
Vehicle bounding boxes are predicted by the vehicle detection algorithm; the bounding boxes in each image frame are counted, giving the number of vehicles in the current frame, and the vehicle density at a given moment is computed by combining road width and length information. Vehicle density reflects the traffic density on the road. For a road with three lanes running in the same direction, the comfort and flexibility with which a driver can operate the vehicle are approximately measured per lane, and finally the traffic condition of the whole same-direction road section is estimated. This can further improve traffic control, accident detection, and route planning.
In order to achieve the purpose, the technical scheme of the invention is as follows:
A road traffic condition detection method based on vehicle-mounted camera shooting comprises the following steps:
(1) vehicle-mounted camera calibration: according to the imaging principle of a monocular vehicle-mounted camera, calibration extracts the measurement information needed by three-dimensional computer vision from two-dimensional images;
(2) vehicle detection on the traffic lane: detecting and identifying vehicles travelling on the road with the deep-learning-based object detection algorithm YOLOv3;
(3) driving-lane detection: fitting the lane lines with the Hough transform algorithm and a straight-line model;
(4) vehicle statistics: counting one by one, according to the positions and categories of the output regression bounding boxes in the image data;
(5) traffic estimation: estimating the traffic condition on the driving route by combining the detection and statistics results for the travelled road section with the novel estimation model proposed by the invention.
A road traffic condition detection method based on vehicle-mounted camera shooting comprises the following steps:
step 1, vehicle-mounted camera calibration: according to the imaging principle of a monocular vehicle-mounted camera, calibration extracts the measurement information needed by three-dimensional computer vision from two-dimensional images;
step 2, vehicle detection on the traffic lane: detecting and identifying vehicles travelling on the road with the deep-learning object detection algorithm YOLOv3;
step 3, driving-lane detection: detecting and fitting the lane lines with the Hough transform algorithm and a straight-line model;
step 4, vehicle statistics: judging the lane to which each vehicle belongs from the positions and categories of the output regression bounding boxes, then counting one by one;
step 5, traffic estimation: from the results of vehicle detection and statistics, a mathematical model is proposed to estimate the traffic condition on the driving route; the vehicle density on the road can be used as an effective indicator of the traffic condition in the traffic system to describe the road traffic state. The detectable distance on lane i is recorded as D_v,i, the count of vehicles detected and recognized on lane i as N_den,i, the vehicle density on lane i as V_den,i, and the lane width as W_lane; Sp_limit represents the speed limit on the road;
V_den^max denotes the maximum vehicle density that results in poor traffic conditions.
Step 1 is as follows: set a coordinate point of the three-dimensional world as M = [X, Y, Z, 1]^T and the corresponding two-dimensional camera-plane pixel coordinate as m = [u, v, 1]^T. The calibration formula of the vehicle-mounted camera is

s·m = A·[R t]·M

where s is a scale factor, A represents the intrinsic parameters of the imaging device, and [R t] represents the extrinsic parameters, R being a rotation matrix and t a translation vector; [R t] applies a coordinate transformation to the point (X, Y, Z). The world coordinate system is fixed relative to the camera, and calibrating the vehicle-mounted camera yields the camera parameters.
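The calibration relation s·m = A·[R t]·M can be sketched numerically; the intrinsic values, camera pose, and test points below are illustrative, not parameters from the patent:

```python
import numpy as np

def project_point(A, R, t, X_world):
    """Project a 3-D world point to pixel coordinates via s*m = A.[R|t].M."""
    M = np.append(X_world, 1.0)            # homogeneous world point [X, Y, Z, 1]
    Rt = np.hstack([R, t.reshape(3, 1)])   # 3x4 extrinsic matrix [R t]
    sm = A @ Rt @ M                        # s * [u, v, 1]
    return sm[:2] / sm[2]                  # divide out the scale factor s

# Illustrative intrinsics: focal length 800 px, principal point (320, 240)
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)     # camera axes aligned with the world axes
t = np.zeros(3)   # camera at the world origin

uv = project_point(A, R, t, np.array([0.0, 0.0, 10.0]))
# a point on the optical axis lands on the principal point (320, 240)
```

A point off the axis, e.g. (1, 0, 10), projects to u = 800·1/10 + 320 = 400 with v unchanged, which shows how the scale factor s (here the depth 10) is divided out.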
Step 2 is as follows: the surrounding environment is sensed and recorded through the vehicle-mounted camera. Using a model trained on a vehicle data set, the deep-learning-based object detection algorithm YOLOv3 detects vehicle information in the recorded video: frames are extracted, vehicles are detected and identified frame by frame, and bounding boxes are drawn for the predicted vehicles.
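YOLOv3 ordinarily outputs several overlapping candidate boxes per object, which are deduplicated by non-maximum suppression before the boxes are counted. The patent does not detail this step, so the following is a generic NMS sketch with invented boxes and scores, not the authors' code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box of every overlapping cluster."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

# two detections of the same car plus one distinct car
boxes = [(100, 100, 200, 180), (105, 102, 205, 182), (400, 120, 480, 190)]
scores = [0.9, 0.8, 0.95]
kept = nms(boxes, scores)   # the duplicate of the first car is suppressed
```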
Step 3 is as follows: the Hough-transform feature-extraction technique detects objects of a specific shape by mapping the original image space to a parameter space and finding shapes from the votes accumulated in that space. The Hough transform parameters are defined as: rho, the distance resolution of the Hough grid, i.e. the resolution of the distance from a line to the image origin (0, 0), in pixels; theta, the angular resolution of the Hough grid, in radians; threshold, the minimum number of votes an accumulator cell needs for its intersections to be declared a line; min_line_length, the minimum number of pixels making up a line, i.e. the shortest line length to report; max_line_gap, the maximum gap in pixels between connectable line segments; two segments closer than this value are treated as one line.
The lane-line positions are marked in the image by applying grayscale conversion, Gaussian smoothing, Canny edge detection, and masking to the original image to obtain the final Hough image. In a single frame, d is the width of the road in the image, W_image is the image width, and H_image is the image height.
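The voting behind the Hough transform, and the meaning of the rho/theta grid, can be illustrated with a toy accumulator (a real pipeline would call a library routine such as OpenCV's HoughLinesP; the grid sizes and edge points here are arbitrary):

```python
import numpy as np

def hough_peak(edge_points, rho_max, n_theta=180):
    """Vote each edge point into a (rho, theta) accumulator and return the peak."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * rho_max + 1, n_theta), dtype=int)
    for x, y in edge_points:
        for ti, th in enumerate(thetas):
            rho = int(np.rint(x * np.cos(th) + y * np.sin(th)))
            acc[rho + rho_max, ti] += 1      # offset so negative rho fits
    r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
    return r_idx - rho_max, thetas[t_idx]

# edge pixels of the diagonal line y = x
points = [(i, i) for i in range(20)]
rho, theta = hough_peak(points, rho_max=50)
# all 20 points vote for rho ~ 0, theta ~ 3*pi/4 (the line x*cos(t) + y*sin(t) = 0)
```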
Step 4 is as follows: frames are extracted from the video recorded by the vehicle-mounted camera and the vehicles in each image are counted; the count in each frame represents the number of vehicles in the current driving scene. According to the lane-line positions marked in step 3, each vehicle is labelled as belonging to the left, middle, or right lane, and the lanes are counted separately. (x_i1, y_i1), (x_i2, y_i2), (x_i3, y_i3), (x_i4, y_i4) are the bounding-box corner coordinates of the i-th vehicle in the image;
1) the centre coordinate of the vehicle is computed from two diagonal corner points of its bounding box:

coord_vehicle = ((x_i1 + x_i3)/2, (y_i1 + y_i3)/2)
2) left lane line: llane = k_l·x + b_1, where k_l is the slope of the left lane line in the image;
3) right lane line: rlane = k_r·x + b_2, where k_r is the slope of the right lane line in the image;
4) the lane to which a vehicle belongs is judged from the position of its centre coordinate:
(1) coord_vehicle < llane: N_den,llane increases by 1 (one more vehicle in the left lane);
(2) llane < coord_vehicle < rlane: N_den,mlane increases by 1 (one more vehicle in the middle lane);
(3) rlane < coord_vehicle: N_den,rlane increases by 1 (one more vehicle in the right lane);
where N_den,llane is the number of vehicles travelling in the left lane, N_den,mlane the number in the middle lane, and N_den,rlane the number in the right lane.
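The per-lane counting rule above can be sketched in plain Python. The comparison is made against each lane line's x position at the vehicle's image row, which is one reasonable reading of the inequalities; the slopes, intercepts, and boxes are invented for illustration:

```python
def box_center(x1, y1, x2, y2):
    """Centre of a bounding box from two diagonal corners."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def assign_lanes(boxes, kl, b1, kr, b2):
    """Count vehicles per lane; a lane line y = k*x + b gives x = (y - b)/k."""
    counts = {"left": 0, "middle": 0, "right": 0}
    for x1, y1, x2, y2 in boxes:
        cx, cy = box_center(x1, y1, x2, y2)
        x_left = (cy - b1) / kl     # left lane line at the vehicle's row
        x_right = (cy - b2) / kr    # right lane line at the vehicle's row
        if cx < x_left:
            counts["left"] += 1
        elif cx < x_right:
            counts["middle"] += 1
        else:
            counts["right"] += 1
    return counts

# toy 640x480 frame: very steep slopes put the lane lines near x=200 and x=440
boxes = [(100, 300, 180, 360), (300, 310, 380, 370), (500, 305, 580, 365)]
counts = assign_lanes(boxes, kl=1000.0, b1=-200000.0, kr=1000.0, b2=-440000.0)
# one vehicle falls in each of the left, middle, and right lanes
```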
Step 5 is as follows: steps 1 to 4 establish the relationship between the three-dimensional real detection environment and the two-dimensional image. Three coordinate systems are involved: the image coordinate system uO3v; the coordinate system with the imaging device O2 as origin; and the world coordinate system XO1Y. Points in world coordinates map proportionally, through the optical axis, to points in image coordinates. From the pixel point O3 at the centre of the camera lens in the image and its actual point D in world coordinates, O1E can be obtained, given the camera height g, the distance O1D along the y axis between the camera and the world point corresponding to the image centre, the image coordinates (u0, v0) of the lens centre of the vehicle-mounted camera, the image coordinate E' of the measured pixel point, the physical pixel height h_pix, the physical pixel width w_pix, and the focal length f of the vehicle-mounted camera:
(The three ranging formulas are reproduced only as images in the original; they relate g, f, h_pix, w_pix, (u0, v0) and E' to the distances O1D and O1E.)
O1e is set as the estimated distance range detected, and the density of vehicles on the road can be used as the traffic condition in the traffic systemAnd the effective index is used for describing the road traffic condition, according to the method, the detectable distance of the lane i is marked as D based on the vehicle counting and coordinate transformation method of the lanev,iThe statistical result of the vehicles detected and recognized on the lane i is recorded as Nden,iDetermining the density of vehicles on lane i, and recording as Vden,iLane width is denoted as WlaneThe density of vehicles on lane i is expressed as:
Figure RE-GDA0002619506180000064
The traffic estimation model introduced in the invention represents the traffic condition as a function of Sp_limit, the speed limit on the road, and V_den^max, the maximum vehicle density that results in poor traffic conditions. (The model formula is given only as an image in the original.)
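The density formula and the estimation model can be sketched together. Since the patent's model formula is reproduced only as an image, the linear speed/density relation below is an assumed stand-in, not the patented expression; V_den^max and the lane dimensions are illustrative:

```python
def lane_density(n_vehicles, detect_dist_m, lane_width_m):
    """V_den,i = N_den,i / (D_v,i * W_lane): vehicles per square metre of lane."""
    return n_vehicles / (detect_dist_m * lane_width_m)

def traffic_condition(v_den, sp_limit_kmh, v_den_max):
    """Assumed linear model: attainable speed falls to 0 as density reaches V_den^max."""
    ratio = min(v_den / v_den_max, 1.0)
    return sp_limit_kmh * (1.0 - ratio)

# 6 vehicles seen over a 100 m detectable range on a 3.75 m-wide lane
v = lane_density(6, 100.0, 3.75)                              # 0.016 veh/m^2
speed = traffic_condition(v, sp_limit_kmh=120.0, v_den_max=0.04)
```

With these illustrative numbers the lane is at 40% of the saturation density, so the estimated attainable speed is 60% of the 120 km/h limit.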
The invention estimates the road traffic condition from video recorded by the vehicle-mounted camera: vehicles are detected in the images extracted from the video frames, counted from the detected vehicle information, and the vehicle density of the driving section, which determines the traffic capacity of the road, is then estimated. Vehicle information is obtained from each still image, and the number of vehicles detected in the current frame is used to estimate the vehicle density; counting vehicles travelling in the same direction can reach high accuracy. The vehicle density reflects the traffic capacity of the current road section and at the same time dynamically describes the traffic condition of the section being driven. With the collected information, the driver can re-plan the driving route. Compared with traffic surveillance cameras, which cannot cover all roads, this result has a great advantage for more accurate and complex road evaluation, and accounts for road utilization in a V2X scenario.
Drawings
FIG. 1 is a schematic diagram illustrating a road traffic condition estimation application in a physical fusion system according to the present invention;
FIG. 2 is a flowchart of a road traffic condition detection method based on vehicle-mounted camera shooting;
FIG. 3 is a schematic model diagram of the vehicle-mounted camera and road surface imaging principle;
FIG. 4 is a road scene diagram of the vehicle position determination and estimation model.
Detailed Description
In order to better explain the technical solution of the present invention, the following detailed description will be made with reference to the accompanying drawings, which are only a part of the embodiments of the present invention and are not intended to be a complete representation of the embodiments of the present invention.
As shown in fig. 1, the present invention comprises the following parts: vehicle detection and identification; traffic-lane detection; vehicle counting; and traffic estimation. The invention trains the model on a vehicle data set comprising the vehicle images and their annotation data. After training, a weight file is generated; the YOLOv3 object detection algorithm with these weights detects vehicles in the images extracted from the video and obtains their positions. In testing, a 10-second clip was taken from a video several minutes long; since video likewise consists of image information, its frames were extracted, and the vehicle detection and identification accuracy reached 90%. During the test the vehicle detection range was about 100 metres; vehicles at greater distances could not always be detected.
As shown in fig. 2, the vehicle-mounted camera is mounted on the vehicle at a certain height, so it views the road from above at a certain angle. Road traffic video is captured by the vehicle-mounted camera, and image information is then extracted from the video frames and processed: vehicles are detected and identified, the number of vehicles and the lanes are detected, and the road traffic condition is then estimated. To count the vehicles within an effective range, the conversion between the distance of two points in the image and the actual distance in the real world must first be known. The proposed method uses a monocular vehicle-mounted camera and, from the imaging principle, estimates real distances by proportion, so the vehicle density within a specific range can be estimated. In the monocular camera model, a view projects points in three-dimensional space onto the image plane through a perspective transformation; combined with the calibration method, the formula is:
s·m = A·[R t]·M

where the coordinate point of the three-dimensional world is M = [X, Y, Z, 1]^T, the two-dimensional camera-plane pixel coordinate is m = [u, v, 1]^T, s is a scale factor, and A holds the intrinsic parameters of the camera. [R t] holds the extrinsic parameters, R being a rotation matrix and t a translation vector. Calibrating the vehicle-mounted camera yields these camera parameters.
As shown in fig. 3, the relationship between the three-dimensional real driving-road environment and the two-dimensional image is established. Three coordinate systems are involved: the image coordinate system uO3v; the coordinate system with the camera O2 as origin; and the world coordinate system XO1Y. Points in world coordinates are imaged proportionally, through the optical axis, onto points in image coordinates. From the pixel point O3 at the centre of the vehicle-mounted camera's lens in the image and the corresponding actual point D in world coordinates, O1E is obtained by converting between pixels and actual distance, given the camera height g, the distance O1D along the y axis between the camera and the world point corresponding to the image centre, and the image coordinates (u0, v0) of the lens centre. Because the vehicle-mounted camera is generally installed with some error, its centre point is not necessarily the midpoint of the image, so u0 and v0 are not necessarily 0. The resulting formulas are reproduced only as images in the original; they combine the image coordinate E' of the measured pixel point, the physical pixel height h_pix, the physical pixel width w_pix, and the focal length f of the vehicle-mounted camera. O1E is set as the estimated detection distance.
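The patent's own ranging formulas survive only as images. A textbook monocular ground-plane model under the simplest assumption (flat road, optical axis parallel to it, camera at height g) gives distance = f_px · g / (v − v0); the sketch below uses that assumed form, not the patented one, with an illustrative focal length in pixels:

```python
def ground_distance(v_row, v0, focal_px, cam_height_m):
    """Distance along the road to the ground point seen at image row v_row.

    Assumes a flat road and an optical axis parallel to it; v_row must lie
    below the principal-point row v0 (i.e. the pixel shows road surface).
    """
    if v_row <= v0:
        raise ValueError("pixel at or above the horizon, not road surface")
    return focal_px * cam_height_m / (v_row - v0)

# camera: 800 px focal length, mounted 1.5 m above the road, v0 = 240
d_near = ground_distance(340, 240, 800.0, 1.5)   # 100 rows below centre -> 12 m
d_far = ground_distance(250, 240, 800.0, 1.5)    # 10 rows below centre -> 120 m
```

The hyperbolic fall-off of rows per metre is why distant vehicles (beyond roughly 100 m in the patent's test) become hard to detect and range.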
As shown in fig. 4, h is the distance from the lane-line vanishing point to the lower edge of the image; V is the vanishing point of the lane lines, whose position in the image is (x0, y0); d is the width of the road in the image; W_image is the image width; H_image is the image height. The lanes are counted separately. (x_i1, y_i1), (x_i2, y_i2), (x_i3, y_i3), (x_i4, y_i4) are the bounding-box corner coordinates of the i-th vehicle in the image.
1) The centre coordinate of the vehicle is computed from two diagonal corner points of its bounding box:

coord_vehicle = ((x_i1 + x_i3)/2, (y_i1 + y_i3)/2)
2) Left lane line: llane = k_l·x + b_1, where k_l is the slope of the left lane line in the image;
3) Right lane line: rlane = k_r·x + b_2, where k_r is the slope of the right lane line in the image;
4) The lane to which a vehicle belongs is judged from the position of its centre coordinate:
(1) coord_vehicle < llane: N_den,llane increases by 1; N_den,llane is the number of vehicles travelling in the left lane;
(2) llane < coord_vehicle < rlane: N_den,mlane increases by 1; N_den,mlane is the number of vehicles travelling in the middle lane;
(3) rlane < coord_vehicle: N_den,rlane increases by 1; N_den,rlane is the number of vehicles travelling in the right lane.
The video recorded by the vehicle-mounted camera likewise consists of images, so if the lanes in an image can be detected, the lane to which each vehicle belongs can be determined. First, frames are extracted from the recorded driving video to obtain images over continuous time. The original RGB image is converted to grayscale, and Canny edge detection is applied with its two important parameters, low_thresh and high_thresh: a pixel whose gradient exceeds high_thresh is an edge point; one whose gradient is below low_thresh is discarded; a gradient between the two thresholds is kept only if the pixel connects to an edge point. After Canny edge detection, the outline of the lanes can be observed. Then, by setting the ROI, edges outside the ROI are filtered out. Each detected line is assigned to the left or right lane line by the sign of its slope, and finally the lines are drawn explicitly on the image to obtain the final lanes. In China, each motor-vehicle lane on expressway and urban-expressway sections is generally 3.75 to 4 m wide, with a minimum of no less than 3 m, and the number of vehicles per unit area, i.e. the traffic state at that moment, is estimated from the area of each driving region enclosed by the lane lines. The density of vehicles on the road can serve as an effective indicator of the traffic situation in the traffic system and also as a road-traffic estimate. According to the above method, with the vehicles counted from the lane information, the estimated detection distance on lane i recorded as D_v,i, the count of vehicles detected and recognized on lane i recorded as N_den,i, and the lane width recorded as W_lane, the vehicle density on lane i, recorded as V_den,i, is expressed as:
V_den,i = N_den,i / (D_v,i × W_lane)
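The double-threshold (hysteresis) step of Canny edge detection discussed above can be shown directly; a real pipeline would call a library Canny routine, and the gradient array and thresholds here are invented:

```python
import numpy as np

def hysteresis(grad, low, high):
    """Keep strong edges (grad > high) plus weak pixels (low..high) that touch one."""
    strong = grad > high
    weak = (grad >= low) & ~strong
    keep = strong.copy()
    changed = True
    while changed:                      # grow strong edges into connected weak pixels
        changed = False
        grown = np.zeros_like(keep)
        for dy in (-1, 0, 1):           # 8-neighbourhood dilation of the edge mask
            for dx in (-1, 0, 1):
                grown |= np.roll(np.roll(keep, dy, axis=0), dx, axis=1)
        new = grown & weak & ~keep
        if new.any():
            keep |= new
            changed = True
    return keep

grad = np.array([[0.0, 0.2, 0.9, 0.4, 0.0],
                 [0.0, 0.1, 0.3, 0.1, 0.0],
                 [0.0, 0.0, 0.0, 0.0, 0.2]])
edges = hysteresis(grad, low=0.25, high=0.8)
# the 0.9 pixel is strong; the 0.4 and 0.3 weak pixels touching it survive,
# while the isolated 0.2 pixels are suppressed
```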
According to the novel traffic estimation model, the scheme of the invention introduces a model to represent the traffic condition (the formula is given only as an image in the original) and, from the detection results obtained in the invention, estimates the traffic state of the current driving section. Sp_limit represents the speed limit on the road; V_den^max denotes the maximum vehicle density that results in poor traffic conditions.
The foregoing illustrates and describes the principles, main features, and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principle; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (6)

1. A road traffic condition detection method based on vehicle-mounted camera shooting is characterized by comprising the following steps:
step 1, vehicle-mounted camera calibration: according to the imaging principle of monocular vehicle-mounted camera equipment, the vehicle-mounted camera calibration is to extract measurement information from a two-dimensional image in three-dimensional computer vision;
step 2, detecting vehicles on the traffic lane: detecting and identifying vehicles running on the road by using the deep-learning object detection algorithm YOLOv3;
step 3, detecting a driving lane: detecting and fitting the lane line by using a Hough transformation algorithm and adopting a straight line model;
step 4, vehicle statistics: judging lanes to which the image data belong according to the positions of the output regression bounding boxes and the categories of the image data, and then counting one by one;
step 5, traffic estimation: according to the results of vehicle detection and statistics, a mathematical model is provided to estimate the traffic condition on the driving route; the vehicle density on the road can be used as an effective indicator of the traffic condition in the traffic system to describe the road traffic state; the detectable distance on lane i is recorded as D_v,i, the count of vehicles detected and recognized on lane i as N_den,i, the vehicle density on lane i as V_den,i, and the lane width as W_lane; Sp_limit represents the speed limit on the road;
[formula image FDA0002544391230000011] denotes the maximum vehicle density that results in poor traffic conditions.
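The five claimed steps can be sketched as one control flow. In the sketch below every function body is a stub with invented toy values (the function names, boxes, and lane parameters are assumptions, not taken from the patent); only the chaining of calibration, detection, lane fitting, counting, and density estimation mirrors claim 1.

```python
# High-level sketch of the five claimed steps. All function bodies are
# stubs with invented toy values; only the overall pipeline is claimed.

def calibrate_camera():
    # Step 1: calibration would yield intrinsic/extrinsic camera parameters.
    return {"focal_px": 1000.0, "mount_height_m": 1.2}

def detect_vehicles(frame):
    # Step 2: a YOLOv3 detector would return per-vehicle bounding boxes
    # (x1, y1, x2, y2); here two fixed toy boxes stand in for detections.
    return [(100, 200, 180, 260), (400, 210, 470, 270)]

def detect_lane_lines(frame):
    # Step 3: Hough-based fitting would return lane lines y = k*x + b.
    return {"left": (-1.0, 430.0), "right": (1.0, -120.0)}

def count_vehicles(boxes):
    # Step 4: per-lane counting (simplified here to a total count).
    return len(boxes)

def estimate_density(count, detect_dist_m, lane_width_m):
    # Step 5: vehicles per unit road area, as in the claimed density index.
    return count / (detect_dist_m * lane_width_m)

frame = None                      # a real system reads on-board camera frames
params = calibrate_camera()
n = count_vehicles(detect_vehicles(frame))
density = estimate_density(n, detect_dist_m=50.0, lane_width_m=3.5)
```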
2. The method for detecting the road traffic condition based on the vehicle-mounted camera shooting as claimed in claim 1, wherein the step 1 is as follows:
setting a coordinate point of the three-dimensional world as M = [X, Y, Z, 1]^T and the two-dimensional camera-plane pixel coordinate as m = [u, v, 1]^T, the calibration formula of the vehicle-mounted imaging equipment is s·m = A[R t]M, where s is a scale factor, A represents the intrinsic parameters of the camera equipment, and [R t] represents the extrinsic parameters, R being a rotation matrix and t a translation vector; [R t] carries out the coordinate transformation of the point (X, Y, Z); the coordinate system is fixed and unchangeable relative to the camera, and calibrating the vehicle-mounted camera yields the camera parameters.
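The projection s·m = A[R t]M can be written out directly with NumPy. This is a minimal numerical sketch of the calibration model only; the focal lengths, principal point, and world point below are invented for illustration, and a camera aligned with the world axes is assumed.

```python
# Minimal sketch of the pinhole model s*m = A [R|t] M from claim 2.
# All numeric camera parameters are invented for illustration.
import numpy as np

# Intrinsic matrix A: focal lengths (pixels) and principal point (u0, v0).
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                        # rotation: camera aligned with world axes
t = np.zeros((3, 1))                 # translation: camera at the world origin
Rt = np.hstack([R, t])               # 3x4 extrinsic matrix [R|t]

M = np.array([1.0, 0.5, 4.0, 1.0])   # homogeneous world point [X, Y, Z, 1]^T

sm = A @ Rt @ M                      # equals s * [u, v, 1]^T
s = sm[2]                            # the scale factor is the third component
u, v = sm[0] / s, sm[1] / s          # pixel coordinates of the projection
```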
3. The method for detecting the road traffic condition based on the vehicle-mounted camera as claimed in claim 1, wherein the step 2 is that: sensing the surrounding environment through the vehicle-mounted camera equipment and recording it; detecting the vehicle information in the recorded video with a model trained on a vehicle data set, using the deep-learning-based target detection algorithm YOLOv3; extracting frames from the video, detecting and identifying the vehicles in the images frame by frame, and drawing bounding boxes around the predicted vehicles.
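YOLOv3 predicts boxes in center format (normalized center x/y, width, height), while drawing the bounding boxes of claim 3 needs pixel-corner coordinates. The standard conversion can be sketched as below; the function name and the sample detection values are illustrative assumptions, not from the patent.

```python
# Convert a normalized YOLO-style box (center_x, center_y, width, height)
# into pixel corner coordinates (x1, y1, x2, y2) for drawing.

def yolo_to_corners(cx, cy, w, h, img_w, img_h):
    """Return (x1, y1, x2, y2) in pixels for a normalized center-format box."""
    x1 = round((cx - w / 2) * img_w)
    y1 = round((cy - h / 2) * img_h)
    x2 = round((cx + w / 2) * img_w)
    y2 = round((cy + h / 2) * img_h)
    return x1, y1, x2, y2

# A detection centred at (0.5, 0.6) covering 20% x 30% of a 640x480 frame:
box = yolo_to_corners(0.5, 0.6, 0.2, 0.3, 640, 480)
```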
4. The method for detecting the road traffic condition based on the vehicle-mounted camera as claimed in claim 1, wherein the step 3 is that: based on the Hough-transform feature extraction technique, an object with a specific shape is detected by mapping the original space to a parameter space and obtaining the image form in the parameter space by voting; the Hough-transform parameters are defined as follows: rho: the distance resolution, i.e. the distance from a straight line to the image origin (0,0), in pixels of the Hough grid; theta: the angular resolution, in radians of the Hough grid; threshold: the minimum number of votes in the accumulator for an intersection in a Hough grid cell to be judged a straight line; min_line_length: the minimum number of pixels forming a line, i.e. the shortest detectable line length; max_line_gap: the maximum gap in pixels between connectable line segments; two line segments whose interval is smaller than this value are considered one straight line;
marking the position of the lane lines in the image, and performing gray-scale transformation, Gaussian smoothing, Canny edge detection and mask processing on the original image to obtain the final Hough image, wherein, in a single-frame image, d is the width of the road in the image, W_image is the image width, and H_image is the image height.
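The rho/theta/threshold voting scheme can be illustrated with a tiny plain-NumPy accumulator. This is a toy sketch on a synthetic edge image, not the claimed implementation: real lane detection would run `cv2.HoughLinesP` on the Canny output with the parameters listed above, and note that (rho, theta) and (-rho, theta+180°) describe the same line.

```python
# Toy Hough-transform voting on a synthetic edge image: every edge pixel
# votes for all (rho, theta) cells it could lie on; the accumulator peak
# identifies the straight line.
import numpy as np

edges = np.zeros((20, 20), dtype=bool)
edges[:, 5] = True                           # synthetic vertical edge at x = 5

thetas = np.deg2rad(np.arange(0, 180))       # angular resolution: 1 degree
max_rho = int(np.hypot(*edges.shape))        # largest possible |rho|
rhos = np.arange(-max_rho, max_rho + 1)      # distance resolution: 1 pixel
acc = np.zeros((len(rhos), len(thetas)), dtype=int)

ys, xs = np.nonzero(edges)
for x, y in zip(xs, ys):
    for j, th in enumerate(thetas):
        rho = int(round(x * np.cos(th) + y * np.sin(th)))
        acc[rho + max_rho, j] += 1           # one vote per (rho, theta) cell

threshold = 15                               # minimum number of votes
i, j = np.unravel_index(acc.argmax(), acc.shape)
best_rho, best_theta = rhos[i], np.rad2deg(thetas[j])
assert acc[i, j] >= threshold                # the peak passes the vote cutoff
```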
5. The method for detecting the road traffic condition based on the vehicle-mounted camera as claimed in claim 1, wherein the step 4 is that: extracting frames from the video recorded by the vehicle-mounted camera equipment and counting the vehicles in the images, the number of vehicles in each frame representing the number of vehicles in the current driving state; according to the positions of the lane lines marked in step 3, the vehicles are classified as belonging to the left lane, the middle lane or the right lane and counted separately; (x_i1, y_i1), (x_i2, y_i2), (x_i3, y_i3), (x_i4, y_i4) are the bounding-box coordinates of the ith vehicle in the image;
1) the center coordinates of the vehicle are calculated from the coordinates of two diagonal corner points of the bounding box:
coord_vehicle = ((x_i1 + x_i3)/2, (y_i1 + y_i3)/2)
2) left lane line: llane = k_l·x + b_1, where k_l is the slope of the left lane line in the image;
3) right lane line: rlane = k_r·x + b_2, where k_r is the slope of the right lane line in the image;
4) judging whether the vehicle belongs to the left, middle or right lane according to the position of its center coordinate:
(1) coord_vehicle < llane: N_den,lane++; the left-lane vehicle count is increased by 1;
(2) llane < coord_vehicle < rlane: N_den,mlane++; the middle-lane vehicle count is increased by 1;
(3) rlane < coord_vehicle: N_den,rlane++; the right-lane vehicle count is increased by 1;
wherein N_den,lane is the number of vehicles traveling in the left lane, N_den,mlane is the number of vehicles traveling in the middle lane, and N_den,rlane is the number of vehicles traveling in the right lane.
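One consistent reading of the comparison above is: evaluate where each lane line sits at the row of the box center, then compare the center's x-coordinate against it. The sketch below follows that reading; the slopes, intercepts, and boxes are invented values, and the function name is hypothetical.

```python
# Per-lane counting per claim 5: compare the box center against the left
# and right lane lines (y = k*x + b in image coordinates). All numeric
# values below are invented for illustration.

def assign_lane(cx, cy, k_l, b_1, k_r, b_2):
    """Return 'left', 'middle' or 'right' for a vehicle center (cx, cy)."""
    x_left = (cy - b_1) / k_l     # x of the left lane line at row cy
    x_right = (cy - b_2) / k_r    # x of the right lane line at row cy
    if cx < x_left:
        return "left"
    if cx < x_right:
        return "middle"
    return "right"

boxes = [(100, 200, 180, 260), (280, 220, 360, 280), (520, 210, 600, 270)]
counts = {"left": 0, "middle": 0, "right": 0}
for x1, y1, x2, y2 in boxes:
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2    # midpoint of diagonal corners
    counts[assign_lane(cx, cy, k_l=-1.0, b_1=430.0, k_r=1.0, b_2=-120.0)] += 1
```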
6. The method for detecting the road traffic condition based on the vehicle-mounted camera as claimed in claim 1, wherein the step 5 is that: establishing the relation between the three-dimensional real detection environment and the two-dimensional image according to steps 1, 2, 3 and 4; there are mainly three coordinate systems: the image coordinate system uO_3v, the camera coordinate system with O_2 as origin, and the world coordinate system XO_1Y; a point in world coordinates and its imaged point are related proportionally along the optical axis in image coordinates; from the pixel point O_3 at the center of the camera lens on the image and its actual point D in world coordinates, one can obtain O_1E, the mounting height g of the camera equipment, and O_1D, the distance on the y axis between the camera and the world coordinate point corresponding to the image coordinate center; combining the image coordinates (u_0, v_0) of the lens center of the vehicle-mounted camera equipment, the image coordinate E' of the measured pixel point, the actual pixel length h_pix, the actual pixel width w_pix, and the vehicle-mounted camera focal length f:
[formula images FDA0002544391230000041–FDA0002544391230000043: relations among O_1E, g, O_1D, (u_0, v_0), E', h_pix, w_pix and f]
O1e is set as the estimated distance range detected, the density of vehicles on the road can be used as an effective index of the traffic condition in the traffic system to describe the traffic condition of the road, and according to the method, the lane-based vehicle counting and coordinate transformation method is adopted, and the detectable distance of the lane i is marked as Dv,iThe statistical result of the vehicles detected and recognized on the lane i is recorded as Nden,iDetermining the density of vehicles on lane i, and recording as Vden,iLane width is denoted as WlaneThe density of vehicles on lane i is expressed as:
Figure FDA0002544391230000044
according to the traffic estimation model of the invention, an index representing the traffic condition is introduced, which can be expressed as:
[formula image FDA0002544391230000045]
wherein Sp_limit represents the speed limit of the road, and [formula image FDA0002544391230000046] denotes the maximum vehicle density that results in poor traffic conditions.
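The lane-density formula can be evaluated directly. The traffic-condition formula itself survives only as an image, so the congestion index below is an assumed reading (measured density relative to the maximum density that still gives acceptable traffic), and all numeric values are invented for illustration.

```python
# Lane density V_den,i = N_den,i / (D_v,i * W_lane), plus an ASSUMED
# congestion index (not the patent's formula, which is only an image).

def lane_density(n_vehicles, detect_dist_m, lane_width_m):
    """Vehicles per square metre of lane surface within the detectable range."""
    return n_vehicles / (detect_dist_m * lane_width_m)

def congestion_ratio(v_den, v_den_max):
    """Assumed index in [0, 1]; values near 1 mean poor traffic conditions."""
    return min(v_den / v_den_max, 1.0)

# 7 vehicles over a 50 m detectable range on a 3.5 m wide lane:
v_den = lane_density(n_vehicles=7, detect_dist_m=50.0, lane_width_m=3.5)
ratio = congestion_ratio(v_den, v_den_max=0.1)
```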
CN202010556147.4A 2020-06-17 2020-06-17 Road traffic condition detection method based on vehicle-mounted camera shooting Pending CN111915883A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010556147.4A CN111915883A (en) 2020-06-17 2020-06-17 Road traffic condition detection method based on vehicle-mounted camera shooting


Publications (1)

Publication Number Publication Date
CN111915883A 2020-11-10

Family

ID=73237795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010556147.4A Pending CN111915883A (en) 2020-06-17 2020-06-17 Road traffic condition detection method based on vehicle-mounted camera shooting

Country Status (1)

Country Link
CN (1) CN111915883A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313943A (en) * 2021-05-27 2021-08-27 中国科学院合肥物质科学研究院 Road side perception-based intersection traffic real-time scheduling method and system
CN113353071A (en) * 2021-05-28 2021-09-07 云度新能源汽车有限公司 Narrow area intersection vehicle safety auxiliary method and system based on deep learning
CN113657265A (en) * 2021-08-16 2021-11-16 长安大学 Vehicle distance detection method, system, device and medium
CN113744538A (en) * 2021-08-03 2021-12-03 湖南省交通科学研究院有限公司 Highway dynamic overload control method, computer equipment and readable storage medium
CN114758511A (en) * 2022-06-14 2022-07-15 深圳市城市交通规划设计研究中心股份有限公司 Sports car overspeed detection system, method, electronic equipment and storage medium
CN114820819A (en) * 2022-05-26 2022-07-29 广东机电职业技术学院 Expressway automatic driving method and system
CN116311903A (en) * 2023-01-28 2023-06-23 深圳市综合交通运行指挥中心 Method for evaluating road running index based on video analysis

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578037A * 2017-09-27 2018-01-12 Zhejiang Gongshang University A road lane line detection method based on objectness estimation
CN109887276A * 2019-01-30 2019-06-14 Beijing Tongfang Software Co., Ltd. A night-time traffic congestion detection method based on fusing foreground extraction with deep learning
CN110379156A * 2019-05-10 2019-10-25 Sichuan University An accident lane discrimination method based on a vehicle-mounted camera fused with GPS flow velocity
CN110853353A * 2019-11-18 2020-02-28 Shandong University Vision-based dense-traffic vehicle counting and traffic flow calculating method and system
CN111272139A * 2020-02-17 2020-06-12 Zhejiang University of Technology Monocular-vision-based vehicle length measuring method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIN JIE et al.: "An In-vehicle Camera Based Traffic Estimation in Smart Transportation", 2019 IEEE 5th International Conference on Computer and Communications (ICCC) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313943A (en) * 2021-05-27 2021-08-27 中国科学院合肥物质科学研究院 Road side perception-based intersection traffic real-time scheduling method and system
CN113313943B (en) * 2021-05-27 2022-07-12 中国科学院合肥物质科学研究院 Road side perception-based intersection traffic real-time scheduling method and system
CN113353071A (en) * 2021-05-28 2021-09-07 云度新能源汽车有限公司 Narrow area intersection vehicle safety auxiliary method and system based on deep learning
CN113353071B (en) * 2021-05-28 2023-12-19 云度新能源汽车有限公司 Narrow area intersection vehicle safety auxiliary method and system based on deep learning
CN113744538A (en) * 2021-08-03 2021-12-03 湖南省交通科学研究院有限公司 Highway dynamic overload control method, computer equipment and readable storage medium
CN113657265A (en) * 2021-08-16 2021-11-16 长安大学 Vehicle distance detection method, system, device and medium
CN113657265B (en) * 2021-08-16 2023-10-10 长安大学 Vehicle distance detection method, system, equipment and medium
CN114820819A (en) * 2022-05-26 2022-07-29 广东机电职业技术学院 Expressway automatic driving method and system
CN114820819B (en) * 2022-05-26 2023-03-31 广东机电职业技术学院 Expressway automatic driving method and system
CN114758511A (en) * 2022-06-14 2022-07-15 深圳市城市交通规划设计研究中心股份有限公司 Sports car overspeed detection system, method, electronic equipment and storage medium
CN114758511B (en) * 2022-06-14 2022-11-25 深圳市城市交通规划设计研究中心股份有限公司 Sports car overspeed detection system, method, electronic equipment and storage medium
CN116311903A (en) * 2023-01-28 2023-06-23 深圳市综合交通运行指挥中心 Method for evaluating road running index based on video analysis

Similar Documents

Publication Publication Date Title
CN111915883A (en) Road traffic condition detection method based on vehicle-mounted camera shooting
CN112700470B (en) Target detection and track extraction method based on traffic video stream
CN112069944B (en) Road congestion level determining method
CN110705458B (en) Boundary detection method and device
Labayrade et al. In-vehicle obstacles detection and characterization by stereovision
CN105184872B (en) Automobile insurance electronic price computing device based on machine vision
CN108645375B (en) Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system
CN114359181B (en) Intelligent traffic target fusion detection method and system based on image and point cloud
CN103164958B (en) Method and system for vehicle monitoring
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN115331191B (en) Vehicle type recognition method, device, system and storage medium
CN113468911B (en) Vehicle-mounted red light running detection method and device, electronic equipment and storage medium
JP7003972B2 (en) Distance estimation device, distance estimation method and computer program for distance estimation
CN115240471B (en) Intelligent factory collision avoidance early warning method and system based on image acquisition
CN112634354B (en) Road side sensor-based networking automatic driving risk assessment method and device
CN115965636A (en) Vehicle side view generating method and device and terminal equipment
KR102418344B1 (en) Traffic information analysis apparatus and method
CN112329724B (en) Real-time detection and snapshot method for lane change of motor vehicle
CN111539279B (en) Road height limit detection method, device, equipment and storage medium
Charouh et al. Headway and Following Distance Estimation using a Monocular Camera and Deep Learning.
Lu et al. A vision-based system for the prevention of car collisions at night
CN112070839A (en) Method and equipment for positioning and ranging rear vehicle transversely and longitudinally
CN114078212A (en) Accurate vehicle type identification method and device based on ETC portal
CN112990117A (en) Installation data processing method and device based on intelligent driving system
Rosebrock et al. Real-time vehicle detection with a single camera using shadow segmentation and temporal verification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201110