CN111275960A - Traffic road condition analysis method, system and camera - Google Patents

Traffic road condition analysis method, system and camera

Info

Publication number
CN111275960A
Authority
CN
China
Prior art keywords
road section
lane
traffic
vehicle
video
Prior art date
Legal status
Pending
Application number
CN201811478666.2A
Other languages
Chinese (zh)
Inventor
金海善
李勇
赵俊钰
斯瑜彬
张爱民
王启东
马立虎
张朴
Current Assignee
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd filed Critical Hangzhou Hikvision System Technology Co Ltd
Priority to CN201811478666.2A priority Critical patent/CN111275960A/en
Publication of CN111275960A publication Critical patent/CN111275960A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 - Traffic data processing
    • G08G1/0133 - Traffic data processing for classifying traffic situation

Landscapes

  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides a traffic road condition analysis method, system and camera. The camera obtains multiple frames of video acquired in real time and the relative positions, in each frame of video, of a plurality of vehicle targets travelling on a specified road section. For each vehicle target, the camera converts the relative position of the vehicle target in each frame of video into the physical position of the vehicle target in a world coordinate system at the moment that frame was acquired. It then determines the current traffic flow information of each lane in the specified road section according to the physical positions of the vehicle targets at different moments, and analyzes the current traffic flow information of each lane to obtain the traffic road condition grade of each lane in the specified road section. With this scheme, real-time analysis of traffic road conditions can be realized.

Description

Traffic road condition analysis method, system and camera
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a traffic road condition analysis method, a traffic road condition analysis system and a camera.
Background
In the traditional analysis of road traffic conditions, the position information of a vehicle target is collected through position sensing hardware such as ground induction coils, microwave detectors and magnetic induction sensors, and data such as driving speed and queuing length are derived from the detected position information.
However, in the above traffic road condition analysis method, the position sensing hardware detects the position information of only a single vehicle target at a time, so the complete road condition cannot be analyzed from a single detection; a complete traffic road condition can be analyzed only after the position information of a plurality of vehicle targets has been continuously collected. This results in poor real-time performance of traffic road condition analysis.
Disclosure of Invention
The embodiment of the invention aims to provide a traffic road condition analysis method, a traffic road condition analysis system and a camera so as to realize real-time analysis of traffic road conditions. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a traffic condition analysis method, which is applied to a camera, and the method includes:
acquiring a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video;
aiming at each vehicle target, converting the physical position of the vehicle target under the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video;
determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments;
and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
Optionally, for each vehicle target, converting a physical position of the vehicle target in the world coordinate system at the time of acquiring each frame of video according to the relative position of the vehicle target in each frame of video, includes:
for each vehicle target in each frame video, determining the relative positions of at least three reference targets which are closest to the vehicle target in the frame video according to the relative position of the vehicle target in the frame video;
searching the physical positions of the at least three reference targets in a world coordinate system according to the relative positions of the at least three reference targets and the pre-stored corresponding relationship between the relative positions and the physical positions of the reference targets;
establishing a transformation matrix of a video coordinate system and the world coordinate system according to the relative positions and the physical positions of the at least three reference targets;
and converting the physical position of the vehicle target in the world coordinate system according to the relative position of the vehicle target in the frame of video and the transformation matrix.
Optionally, for each vehicle target, converting a physical position of the vehicle target in the world coordinate system at the time of acquiring each frame of video according to the relative position of the vehicle target in each frame of video, includes:
aiming at each vehicle target in each frame of video, determining a target calibration area to which the vehicle target belongs according to the relative position of the vehicle target in the frame of video and each calibration area divided in advance based on the relative position of each reference target;
and converting the physical position of the vehicle target in a world coordinate system according to the relative position of the vehicle target in the frame of video and the homography matrix corresponding to the target calibration area acquired in advance.
Optionally, the method further includes:
acquiring the equipment parameters of the camera;
determining a high-precision map matched with the camera according to the equipment parameters;
identifying a target located at a specified relative position in the video, and acquiring a physical position of the target in the high-precision map;
and determining the relative position of each reference target in the video according to the position relation between the target and the peripheral reference target.
Optionally, for each vehicle target, converting a physical position of the vehicle target in the world coordinate system at the time of acquiring each frame of video according to the relative position of the vehicle target in each frame of video, includes:
acquiring equipment parameters of the camera, wherein the equipment parameters comprise a field angle, an erection height value and longitude and latitude;
for each vehicle target in each frame of video, determining the PT coordinates the camera would have when directly facing the vehicle target, according to the relative position of the vehicle target in the frame of video and the field angle, and taking the PT coordinates as a first P coordinate and a first T coordinate;
acquiring a P coordinate of the camera when the camera points to a specified direction, and taking the P coordinate as a second P coordinate;
calculating the difference between the first P coordinate and the second P coordinate to be used as the horizontal included angle between the vehicle target and the designated direction;
calculating the product of the tangent value of the first T coordinate and the erection height value as the horizontal distance between the vehicle target and the camera;
according to the horizontal included angle and the horizontal distance, calculating the longitude and latitude distance between the vehicle target and the camera through a trigonometric function;
and calculating the physical position of the vehicle target under a world coordinate system according to the longitude and latitude of the camera and the longitude and latitude distance.
Optionally, the analyzing the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section includes:
acquiring preset traffic information of the specified road section;
determining a road traffic jam index of each lane in the specified road section according to the preset traffic flow information and the current traffic flow information of each lane in the specified road section;
and determining the traffic road condition grade of each lane in the specified road section by utilizing the corresponding relation between the pre-stored road traffic congestion index and the traffic road condition grade according to the road traffic congestion index of each lane in the specified road section.
Optionally, the current traffic information includes an average speed of the current road section; the preset traffic flow information comprises a preset vehicle speed;
the determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments comprises the following steps:
aiming at each vehicle target, calculating the average speed of the vehicle target on the specified road section according to the time of at least two frames of videos and the physical position of the vehicle target under the world coordinate system corresponding to the time of the at least two frames of videos;
calculating the current road section average speed of each lane in the specified road section according to the average speed of each vehicle target in the specified road section;
determining a road traffic congestion index of each lane in the specified road section according to the preset traffic information and the current traffic information of each lane in the specified road section, wherein the determining comprises the following steps:
and determining the road traffic jam index of each lane in the specified road section according to the preset vehicle speed and the current road section average vehicle speed of each lane in the specified road section.
Optionally, after analyzing and obtaining the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section, the method further includes:
displaying the traffic road condition of each lane in the specified road section on a high-precision map according to the traffic road condition grade of each lane in the specified road section;
alternatively,
and sending the traffic road condition grade of each lane in the appointed road section to a platform server so that the platform server displays the traffic road condition of each lane in the appointed road section on a high-precision map according to the traffic road condition grade of each lane in the appointed road section.
In a second aspect, an embodiment of the present invention provides a traffic condition analysis method, which is applied to a camera, and the method includes:
acquiring a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video;
aiming at each vehicle target, converting the physical position of the vehicle target under the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video;
determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments;
and sending the current traffic information of each lane in the specified road section to a platform server so that the platform server analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section.
In a third aspect, an embodiment of the present invention provides a traffic road condition analysis method, which is applied to a platform server, and the method includes:
receiving current traffic information of each lane in an appointed road section, which is sent by a camera, wherein the current traffic information is determined by the camera according to the physical positions of each vehicle target at different moments in a multi-frame video acquired in real time;
and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
Optionally, after analyzing and obtaining the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section, the method further includes:
and displaying the traffic road conditions of all lanes in the appointed road section on a high-precision map according to the traffic road condition grades of all lanes in the appointed road section.
In a fourth aspect, an embodiment of the present invention provides a traffic condition analysis method, where the method includes:
acquiring a multi-frame video acquired by a camera in real time;
identifying relative positions of a plurality of vehicle targets which travel on a specified road section in each frame of video;
aiming at each vehicle target, acquiring the physical position of the vehicle target in a world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video;
determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments;
and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
Optionally, the current traffic information includes an average speed of the current road section;
the determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments comprises the following steps:
aiming at each vehicle target, calculating the average speed of the vehicle target on the specified road section according to the time of at least two frames of videos and the physical position of the vehicle target under the world coordinate system corresponding to the time of the at least two frames of videos;
and calculating the current road section average speed of each lane in the specified road section according to the average speed of each vehicle target in the specified road section.
Optionally, the analyzing the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section includes:
acquiring preset traffic information of the specified road section;
determining a road traffic jam index of each lane in the specified road section according to the preset traffic flow information and the current traffic flow information of each lane in the specified road section;
and determining the traffic road condition grade of each lane in the specified road section by utilizing the corresponding relation between the pre-stored road traffic congestion index and the traffic road condition grade according to the road traffic congestion index of each lane in the specified road section.
Optionally, the current traffic information includes an average speed of the current road section; the preset traffic flow information comprises a preset vehicle speed;
determining a road traffic congestion index of each lane in the specified road section according to the preset traffic information and the current traffic information of each lane in the specified road section, wherein the determining comprises the following steps:
and determining the road traffic jam index of each lane in the specified road section according to the preset vehicle speed and the current road section average vehicle speed of each lane in the specified road section.
Optionally, after analyzing and obtaining the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section, the method further includes:
and displaying the traffic road conditions of all lanes in the appointed road section on a high-precision map according to the traffic road condition grades of all lanes in the appointed road section.
In a fifth aspect, an embodiment of the present invention provides a camera, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, so as to implement the traffic condition analysis method provided in the first aspect of the embodiment of the present invention, or implement the traffic condition analysis method provided in the second aspect of the embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention provides a platform server, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, so as to implement the traffic condition analysis method provided in the third aspect of the embodiment of the present invention, or implement the traffic condition analysis method provided in the fourth aspect of the embodiment of the present invention.
In a seventh aspect, an embodiment of the present invention provides a traffic road condition analysis system, including multiple cameras and a platform server;
the camera is used for acquiring a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video; aiming at each vehicle target, converting the physical position of the vehicle target under the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video; determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments; sending current traffic information of each lane in the specified road section to a platform server;
the platform server is used for receiving current traffic information of each lane in the specified road section sent by the camera; and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
In an eighth aspect, an embodiment of the present invention provides a traffic accident information collecting system, including multiple cameras and a platform server;
the camera is used for acquiring a video;
the platform server is used for acquiring multi-frame videos acquired by the camera in real time; identifying relative positions of a plurality of vehicle targets which travel on a specified road section in each frame of video; aiming at each vehicle target, acquiring the physical position of the vehicle target in a world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video; determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments; and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
According to the traffic road condition analysis method, system and camera provided by the embodiments of the invention, the camera obtains multiple frames of video acquired in real time and the relative positions, in each frame of video, of a plurality of vehicle targets travelling on a specified road section; for each vehicle target, the camera converts the relative position of the vehicle target in each frame of video into the physical position of the vehicle target in the world coordinate system at the moment that frame was acquired; the camera determines the current traffic flow information of each lane in the specified road section according to the physical positions of the vehicle targets at different moments; and the traffic road condition grade of each lane in the specified road section is obtained by analyzing the current traffic flow information of each lane. Because video of the specified road section is acquired by the camera in real time, the physical positions of all vehicle targets in the specified road section under the world coordinate system can be obtained accurately; because the physical positions of the vehicles are associated with time, the traffic flow information converted from them can accurately represent the road condition of each lane in the specified road section; and because the physical positions of the vehicle targets are converted from video acquired in real time, the traffic road condition analyzed from them is itself real-time, so real-time analysis of the traffic road condition is realized.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a traffic road condition analysis method applied to a camera according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a traffic road condition analysis method applied to a camera according to another embodiment of the present invention;
fig. 3 is a schematic flow chart of a traffic road condition analysis method applied to a platform server according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of a traffic road condition analysis method of interaction between a camera and a platform server according to an embodiment of the present invention;
fig. 5 is a schematic flow chart illustrating a traffic road condition analysis method applied to a platform server according to another embodiment of the present invention;
fig. 6 is a schematic structural diagram of a traffic road condition analysis device applied to a camera according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a traffic road condition analysis device applied to a camera according to another embodiment of the present invention;
fig. 8 is a schematic structural diagram of a traffic road condition analysis device applied to a platform server according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a traffic road condition analysis device applied to a platform server according to another embodiment of the present invention;
FIG. 10 is a schematic view of a camera according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a camera according to another embodiment of the present invention;
FIG. 12 is a block diagram of a platform server according to an embodiment of the present invention;
FIG. 13 is a block diagram of a platform server according to another embodiment of the present invention;
fig. 14 is a schematic structural diagram of a traffic accident information collecting system according to an embodiment of the present invention;
fig. 15 is a schematic structural diagram of a traffic accident information collecting system according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to realize real-time analysis of traffic road conditions, the embodiment of the invention provides a traffic road condition analysis method, a traffic road condition analysis device, a camera, a platform server, a machine-readable storage medium, and a system.
The terms in the examples of the present invention are explained as follows:
lane level flow rate: the standard lane is generally 3.75 meters, and a certain deviation may occur according to the difference of regions, and is generally between 3.5 meters and 4 meters, so that the lane to which the collected traffic flow belongs can be determined as long as the error can be controlled within 1.75 meters.
Traffic index: the traffic index is a short name of a road traffic operation index (also called road traffic congestion index), and the road traffic operation index is an index which comprehensively reflects the traffic operation condition of a road network. The traffic index value range is 0-10, and 5 traffic road condition grades can be divided. Wherein 0-2 corresponds to 'unblocked', 2-4 corresponds to 'basic unblocked', 4-6 corresponds to 'light congestion', 6-8 corresponds to 'medium congestion', 8-10 corresponds to 'severe congestion', and the higher the numerical value is, the more serious the traffic congestion condition is.
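The five grades listed above can be expressed as a simple lookup; the sketch below is purely illustrative and only mirrors the ranges given:

```python
def traffic_grade_from_index(index: float) -> str:
    """Map a road traffic operation index (0-10) to one of the five grades
    described above."""
    if index <= 2:
        return "unblocked"
    if index <= 4:
        return "basically unblocked"
    if index <= 6:
        return "light congestion"
    if index <= 8:
        return "moderate congestion"
    return "severe congestion"
```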
The traffic road condition analysis method provided by the embodiment of the invention can be applied to a camera that has a road condition analysis function; as shown in fig. 1, the traffic road condition analysis method can comprise the following steps:
s101, acquiring a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video.
In the traffic monitoring system, a plurality of fixed cameras are erected above a road, the cameras erected on different road sections are used for collecting the running video of a vehicle in the road section, and the cameras can shoot the running condition of a vehicle target in real time to obtain a multi-frame video. For a pan-tilt camera, a user may specify a segment that the pan-tilt camera needs to monitor. It should be noted that the specified road segment may be the whole road segment monitored by the camera, or may be a part of the road segment monitored by the camera, and the specified road segment may be set according to the actual requirement and the analysis result.
When the traffic road condition analysis needs to be carried out on a certain specified road section, a multi-frame video acquired in real time can be acquired, and the relative position of the same vehicle target in each video frame is tracked by utilizing target tracking algorithms such as deep learning and the like.
And S102, for each vehicle target, converting the physical position of the vehicle target in the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video.
The relative position is the position of the vehicle target in the coordinate system of each frame of video collected by the camera. Since this video coordinate system has a known mapping relationship with the world coordinate system, the physical position of the vehicle target in the world coordinate system can be calculated through coordinate conversion. Because each frame of video is collected at a different time, each converted physical position also carries time information, i.e. it is spatio-temporal data; the physical position may specifically be a GPS position.
Optionally, S102 may specifically be:
for each vehicle target in each frame video, determining the relative positions of at least three reference targets which are closest to the vehicle target in the frame video according to the relative position of the vehicle target in the frame video;
searching the physical positions of the at least three reference targets in a world coordinate system according to the relative positions of the at least three reference targets and the pre-stored corresponding relationship between the relative positions and the physical positions of the reference targets;
establishing a transformation matrix of a video coordinate system and a world coordinate system according to the relative positions and the physical positions of at least three reference targets;
and converting the physical position of the vehicle target in the world coordinate system according to the relative position of the vehicle target in the frame video and the transformation matrix.
Based on the relative position of the vehicle target in the video, the relative positions of at least three nearby reference targets, such as road marking lines and street lamps, can be extracted. Since the correspondence between the relative position and the physical position of each reference target is stored in advance, the physical positions of these reference targets can be looked up, and a transformation matrix between the video coordinate system and the world coordinate system can be established from the physical positions and relative positions of the reference targets. Because the transformation matrix is established from reference targets near the vehicle target, substituting the relative position of the vehicle target into the transformation matrix yields its physical position through coordinate transformation.
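A minimal sketch of this transformation-matrix step is given below. It assumes the pre-stored correspondence is a table from pixel positions to world positions and estimates an affine transform from the three nearest reference targets; this is one possible reading of the step, not the patent's actual implementation, and all names and values are illustrative.

```python
import numpy as np

# relative (pixel) position -> physical (world) position, pre-stored per camera
# (values are made up for illustration)
REFERENCE_TABLE = {
    (120.0, 480.0): (30.2751, 120.1553),   # e.g. a lane marking corner
    (860.0, 500.0): (30.2752, 120.1561),   # e.g. a street-lamp base
    (500.0, 200.0): (30.2760, 120.1557),   # e.g. a manhole cover
}

def nearest_references(pixel_pos, table, k=3):
    """Pick the k reference targets closest to the vehicle in pixel space."""
    keys = sorted(table, key=lambda p: (p[0] - pixel_pos[0])**2 + (p[1] - pixel_pos[1])**2)
    return keys[:k]

def affine_from_pairs(src_pts, dst_pts):
    """Solve for a 2x3 affine matrix A such that dst ~= A @ [x, y, 1]."""
    src = np.hstack([np.asarray(src_pts, float), np.ones((len(src_pts), 1))])
    dst = np.asarray(dst_pts, float)
    sol, _, _, _ = np.linalg.lstsq(src, dst, rcond=None)  # exact for 3 non-collinear points
    return sol.T  # shape (2, 3)

def pixel_to_world(pixel_pos, table):
    refs = nearest_references(pixel_pos, table)
    A = affine_from_pairs(refs, [table[r] for r in refs])
    x, y = pixel_pos
    return tuple(A @ np.array([x, y, 1.0]))

print(pixel_to_world((400.0, 450.0), REFERENCE_TABLE))
```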
Optionally, S102 may specifically be:
aiming at each vehicle target in each frame of video, determining a target calibration area to which the vehicle target belongs according to the relative position of the vehicle target in the frame of video and each calibration area divided in advance based on the relative position of each reference target;
and converting the physical position of the vehicle target in the world coordinate system according to the relative position of the vehicle target in the frame video and the homography matrix corresponding to the target calibration area acquired in advance.
Because a plurality of reference targets exist within the visual range of the camera, the video can be divided in advance into a plurality of calibration areas according to the positions of the reference targets, where the vertices of each calibration area are the relative position points of at least three mutually close reference targets in the video. The target calibration area to which a vehicle target belongs can then be determined from the relative position of the vehicle target. A corresponding homography matrix is preset for each calibration area, and it records the mapping relationship between the physical position of each reference target in that calibration area and its relative position in the video. Since the mapping between the relative and physical positions of the vehicle target is closest to that of the reference targets in its target calibration area, the relative position of the vehicle target can be directly substituted into the homography matrix corresponding to the target calibration area, and the physical position of the vehicle target is obtained through conversion.
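The sketch below illustrates the per-calibration-area homography idea using OpenCV; the four-point calibration area, coordinate values and names are assumptions for illustration, not the patent's own data.

```python
import numpy as np
import cv2

# One calibration area: four pixel corners and the matching world corners
# (here treated as a local metric coordinate frame); values are illustrative.
pixel_corners = np.float32([[100, 600], [900, 600], [820, 300], [180, 300]])
world_corners = np.float32([[0.0, 0.0], [15.0, 0.0], [15.0, 40.0], [0.0, 40.0]])

# Homography for this calibration area (precomputed and stored in practice).
H, _ = cv2.findHomography(pixel_corners, world_corners)

def pixel_to_world(pixel_pos, H):
    """Map a vehicle's pixel position into world coordinates via the area's homography."""
    pt = np.float32([[pixel_pos]])                 # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]   # world coordinates

print(pixel_to_world((500, 450), H))
```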
Optionally, the method provided in the embodiment of the present invention may further implement the following steps:
acquiring the equipment parameters of the camera;
determining a high-precision map matched with the camera according to the equipment parameters;
identifying a target located at a specified relative position in a video, and acquiring a physical position of the target in a high-precision map;
and determining the relative position of each reference target in the video according to the position relation between the target and the peripheral reference target.
In the above methods for converting physical positions, the mapping relationship between the relative position of each reference target in the video and its physical position in the world coordinate system needs to be recorded in advance. Such recording requires matching and fusing a high-precision map with the video beforehand. The orientation angle, longitude and latitude and other device parameters of the camera can be known in advance, for example through calibration or through equipment such as a GPS chip and a gyroscope/electronic compass carried by the camera, and a matching high-precision map can be determined from the orientation, physical position and other device parameters of the camera. The camera then determines a target point through image recognition, for example the center point of an intersection, whose physical location in the high-precision map is also unambiguous, and the relative position of each reference target in the video is determined one by one according to the positional relationship between this target point and the surrounding reference targets. The selected reference targets can be road marking lines, street lamps, manhole covers, isolation guardrails and the like. The precision of the physical positions obtained by conversion can meet high-precision requirements, for example within 1 meter over a range of 200 meters, which is sufficient for lane-level applications.
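A minimal sketch of how such a pre-recorded correspondence might be organized is shown below; the data structure, field names and values are illustrative assumptions only.

```python
# Hypothetical calibration record: a clearly identifiable anchor target (e.g. the
# intersection center point) located both in the video and in the matched
# high-precision map, plus nearby reference targets recorded by their pixel and
# map positions.
calibration = {
    "camera_id": "cam-001",                                          # assumed identifier
    "device_params": {"lat": 30.2749, "lon": 120.1551, "heading_deg": 35.0},
    "anchor": {"pixel": (512, 384), "map": (30.27555, 120.15590)},   # e.g. intersection center
    "references": [
        {"name": "stop_line_left",  "pixel": (300, 520), "map": (30.27549, 120.15571)},
        {"name": "street_lamp_3",   "pixel": (760, 410), "map": (30.27561, 120.15602)},
        {"name": "manhole_cover_1", "pixel": (505, 610), "map": (30.27544, 120.15588)},
    ],
}
```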
Optionally, S102 may specifically be:
acquiring equipment parameters of the camera, wherein the equipment parameters comprise a field angle, an erection height value and longitude and latitude;
for each vehicle target in each frame of video, determining the PT coordinates the camera would have when directly facing the vehicle target, according to the relative position of the vehicle target in the frame of video and the field angle, and taking the PT coordinates as a first P coordinate and a first T coordinate;
acquiring a P coordinate of the camera when the camera points to the designated direction as a second P coordinate;
calculating the difference between the first P coordinate and the second P coordinate to be used as the horizontal included angle between the vehicle target and the designated direction;
calculating the product of the tangent value of the first T coordinate and the erection height value as the horizontal distance between the vehicle target and the camera;
calculating the longitude and latitude distance between the vehicle target and the camera through a trigonometric function according to the horizontal included angle and the horizontal distance;
and calculating the physical position of the vehicle target under the world coordinate system according to the longitude and latitude and the longitude and latitude distance of the camera.
The PT coordinates of the camera when it captures the vehicle target may be read first, and then converted, according to the relative position of the vehicle target in the captured video and the field angle at the time of capture, into the PT coordinates the camera would have when directly facing the vehicle target, which are taken as the first P coordinate and the first T coordinate. Assuming that the relative position of the vehicle target in the video captured by the camera is (X, Y), the first P coordinate and the first T coordinate can be obtained by the following equations:
Pan_tar=Pan_cur+arctan((2*X/L1-1)*tan(θ1/2));
Tilt_tar=Tilt_cur+arctan((2*Y/L2-1)*tan(θ2/2));
where Pan_tar represents the first P coordinate, Tilt_tar represents the first T coordinate, Pan_cur represents the horizontal direction angle of the current camera in the PT coordinate system, Tilt_cur represents the vertical direction angle of the current camera in the PT coordinate system, and (Pan_cur, Tilt_cur) corresponds to the center position of the current video; L1 represents the total number of pixels in the horizontal direction of the video and L2 the total number of pixels in the vertical direction; θ1 represents the horizontal field angle corresponding to the current video and θ2 the vertical field angle; the XY coordinate system takes the top-left corner of the video as the origin, with pixels as the unit.
The P coordinate of the camera when it points in a designated direction such as due north, due south, due east or due west can be acquired through the electronic compass of the camera, and is referred to as the second P coordinate for ease of description. The difference between the first P coordinate and the second P coordinate is the horizontal included angle between the vehicle target and the designated direction.
The horizontal distance between the vehicle target and the camera can be calculated as L = h * tan(T), where T is the first T coordinate, h represents the erection height of the camera, and L represents the horizontal distance between the vehicle target and the camera. This horizontal distance is the distance between the camera and the vehicle target assuming the camera and the vehicle target are at the same height.
Assuming that the designated direction is due north, the longitude and latitude distances between the vehicle target and the camera can be calculated by L*sinθ = Llon and L*cosθ = Llat, where L represents the horizontal distance between the vehicle target and the camera, θ represents the horizontal included angle between the vehicle target and the due-north direction, Llon indicates the longitude-direction distance between the vehicle target and the camera, and Llat indicates the latitude-direction distance. Further, assuming that the designated direction is due east, the distances can be calculated by L*cosα = Llon and L*sinα = Llat, where α represents the horizontal included angle between the vehicle target and the due-east direction. For the designated directions of due south and due west, the calculation processes are similar and are not described here again.
The camera is usually provided with a GPS positioning device, from which the longitude and latitude of the camera can be obtained. With the longitude and latitude of the camera and the longitude and latitude distances between the camera and the vehicle target, the longitude and latitude of the vehicle target can be calculated, which gives the physical position of the vehicle target under the world coordinate system.
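Putting the above PT-based steps together, the following sketch follows the formulas in this section (Pan_tar, Tilt_tar, L = h * tan(T), Llon and Llat). The final conversion from metric offsets to degrees and all constants are our own assumptions, since the text does not specify them; the parameter names are illustrative.

```python
import math

def pixel_to_latlon(x, y,                      # relative position in the frame (pixels)
                    L1, L2,                    # frame width / height in pixels
                    pan_cur, tilt_cur,         # current PT angles (deg), at the frame center
                    theta1, theta2,            # horizontal / vertical field angles (deg)
                    pan_north,                 # P coordinate when pointing due north (deg)
                    h,                         # erection height of the camera (m)
                    cam_lat, cam_lon):         # camera latitude / longitude (deg)
    # PT coordinates the camera would have when directly facing the vehicle target
    pan_tar = pan_cur + math.degrees(math.atan((2*x/L1 - 1) * math.tan(math.radians(theta1/2))))
    tilt_tar = tilt_cur + math.degrees(math.atan((2*y/L2 - 1) * math.tan(math.radians(theta2/2))))

    theta = math.radians(pan_tar - pan_north)          # horizontal angle from due north
    L = h * math.tan(math.radians(tilt_tar))           # horizontal distance to the target (m)

    d_lon_m = L * math.sin(theta)                      # longitude-direction offset in meters
    d_lat_m = L * math.cos(theta)                      # latitude-direction offset in meters

    # Assumed local approximation for converting metric offsets to degrees.
    lat = cam_lat + d_lat_m / 111320.0
    lon = cam_lon + d_lon_m / (111320.0 * math.cos(math.radians(cam_lat)))
    return lat, lon
```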
S103, determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments.
For each vehicle target, its physical positions at different moments are integrated to obtain the driving conditions of the vehicle target, such as its driving speed, driving direction, the period during which it appears on the specified road section, and the lane on which it drives. Integrating the driving conditions of all vehicles collected on the specified road section then yields the current traffic flow information of each lane in the specified road section, such as the traffic flow, the average vehicle speed and the vehicle queuing length.
Optionally, the current traffic information may include an average speed of the current road section, and the preset traffic information may include a preset speed.
Correspondingly, S103 may specifically be:
aiming at each vehicle target, calculating the average speed of the vehicle target on a specified road section according to the time of at least two frames of videos and the physical position of the vehicle target under a world coordinate system corresponding to the time of at least two frames of videos;
and calculating the current road section average speed of each lane in the specified road section according to the average speed of each vehicle target in the specified road section.
Based on the above steps, the real-time physical position of each vehicle target can be obtained in real time; the time accuracy can reach the millisecond level, since the camera collects video at N frames per second, and when N is 25 the minimum granularity is 1 s / 25 = 0.04 s. Assuming the camera collects video at 25 frames per second, the physical positions S1 and S2 of a vehicle target at the moments T1 and T2 corresponding to any two frames of video can be obtained through coordinate conversion, and the average vehicle speed of the vehicle target calculated from these two frames is V = |S1 - S2| / |T1 - T2|. A more accurate average vehicle speed can be obtained by using more than two frames of video, i.e. calculating an average speed between every two adjacent frames and then averaging these values as a whole. In this way, the average speed of each vehicle target travelling in the specified road section can be obtained, and the current road-section average speed of each lane in the specified road section can be calculated by averaging the average speeds of all vehicle targets on that lane.
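A minimal sketch of this per-vehicle and per-lane averaging could look like the following, with physical positions treated here as metric coordinates (e.g. after projecting GPS positions onto a local plane) and all names assumed:

```python
import math

def average_speed_kmh(track):
    """track: list of (timestamp_s, x_m, y_m) for one vehicle target,
    one entry per analysed frame, in chronological order."""
    if len(track) < 2:
        return None
    speeds = []
    for (t1, x1, y1), (t2, x2, y2) in zip(track, track[1:]):
        dist = math.hypot(x2 - x1, y2 - y1)        # |S1 - S2| in meters
        speeds.append(dist / (t2 - t1) * 3.6)      # m/s -> km/h for one frame pair
    return sum(speeds) / len(speeds)

def lane_average_speed(tracks_in_lane):
    """Average the per-vehicle average speeds of all vehicle targets in one lane."""
    vals = [average_speed_kmh(t) for t in tracks_in_lane]
    vals = [v for v in vals if v is not None]
    return sum(vals) / len(vals) if vals else None
```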
And S104, analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
Because the current traffic flow information contains information such as the traffic flow, the average vehicle speed and the vehicle queuing length, it can intuitively represent the road condition of each lane in the specified road section. For example, if the traffic flow of the straight lane in the specified road section is very large, reaching 1000 vehicles per hour, and the average vehicle speed is very low, only 20 km/h, the traffic road condition grade of the straight lane in the specified road section can be determined to be moderate congestion. For another example, if the queuing length of the left-turn lane in the specified road section is very long, exceeding 2 km, this indicates that the traffic condition of the left-turn lane is particularly poor, and its traffic road condition grade can be determined to be severe congestion.
In summary, the traffic flow passing conditions of each lane in the specified road section can be analyzed by using one or more pieces of information in the current traffic flow information, and then the traffic road condition grade of each lane in the specified road section is determined.
Optionally, S104 may specifically be
Acquiring preset traffic information of a specified road section;
determining a road traffic jam index of each lane in the specified road section according to preset traffic flow information and current traffic flow information of each lane in the specified road section;
and determining the traffic road condition grade of each lane in the specified road section by utilizing the corresponding relation between the pre-stored road traffic congestion index and the traffic road condition grade according to the road traffic congestion index of each lane in the specified road section.
For a specified road section, preset traffic flow information may be configured in advance; the preset traffic flow information may be threshold values that delimit degrees of road traffic congestion. For example: an average vehicle speed below 10 km/h is determined as severe congestion; between 10 km/h and 20 km/h, moderate congestion; between 20 km/h and 30 km/h, light congestion; between 30 km/h and 50 km/h, basically unblocked; above 50 km/h, unblocked. As another example: a queuing length of no more than 300 meters is considered unblocked; 300-500 meters, basically unblocked; 500-700 meters, light congestion; 700-1000 meters, moderate congestion; over 1000 meters, severe congestion.
Based on the above example, after the current traffic information is obtained, the current traffic information may be compared with preset traffic information, and a road traffic congestion index is correspondingly allocated, so that whether the designated road section is severely congested, moderately congested, slightly congested, basically unobstructed or unobstructed may be determined according to a corresponding relationship between the prestored road traffic congestion index and the traffic road condition level, where the prestored corresponding relationship may be set based on an industry universal standard or may be defined by a city management department according to a requirement.
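As a purely illustrative sketch of such a threshold comparison, using the example speed bands from the paragraph above (all names and bands are assumptions, not prescribed by the patent):

```python
def grade_by_average_speed(avg_speed_kmh: float) -> str:
    """Map a lane's current road-section average speed to a traffic road
    condition grade using the example thresholds above."""
    if avg_speed_kmh > 50:
        return "unblocked"
    if avg_speed_kmh > 30:
        return "basically unblocked"
    if avg_speed_kmh > 20:
        return "light congestion"
    if avg_speed_kmh > 10:
        return "moderate congestion"
    return "severe congestion"
```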
Optionally, the current traffic information may include an average speed of the current road section; the preset traffic flow information may include a preset vehicle speed;
correspondingly, the step of determining the road traffic congestion index of each lane in the specified road section according to the preset traffic information and the current traffic information of each lane in the specified road section may specifically be:
and determining the road traffic jam index of each lane in the specified road section according to the preset vehicle speed and the current road section average vehicle speed of each lane in the specified road section.
The road traffic congestion index may be defined as the road-section travel time ratio, that is, the ratio of the actual travel time through the specified road section to the expected travel time, which equals the ratio of the preset vehicle speed to the actual current road-section average vehicle speed. The road traffic congestion index is therefore calculated as:

R_T = V_preset / V_avg

where R_T is the road traffic congestion index, V_preset is the preset vehicle speed, and V_avg is the current road-section average vehicle speed of a certain lane. Since the correspondence between the road traffic congestion index and the traffic road condition grade is pre-stored, as shown in TABLE 1, the range into which R_T falls can be determined directly, and thereby the traffic road condition grade of each lane in the specified road section.
TABLE 1

Grade of operation   | Unblocked  | Basically unblocked | Light congestion  | Moderate congestion | Severe congestion
Expressway           | R_T ≤ 1.36 | 1.36 < R_T ≤ 1.88   | 1.88 < R_T ≤ 2.5  | 2.5 < R_T ≤ 3.75    | R_T > 3.75
Main road            | R_T ≤ 1.25 | 1.25 < R_T ≤ 1.67   | 1.67 < R_T ≤ 2.5  | 2.5 < R_T ≤ 3.33    | R_T > 3.33
Secondary trunk road | R_T ≤ 1.33 | 1.33 < R_T ≤ 2      | 2 < R_T ≤ 2.67    | 2.67 < R_T ≤ 4      | R_T > 4
Branch road          | R_T ≤ 1.16 | 1.16 < R_T ≤ 1.75   | 1.75 < R_T ≤ 2.33 | 2.33 < R_T ≤ 3.5    | R_T > 3.5
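As an illustration of how Table 1 might be applied, the following sketch computes R_T from a preset speed and a lane's current average speed and looks up the grade per road class; the dictionary layout, names and the example values are assumptions.

```python
# Upper bounds per grade, following Table 1; anything above the last bound is severe congestion.
TABLE_1 = {
    "expressway":      [(1.36, "unblocked"), (1.88, "basically unblocked"),
                        (2.5, "light congestion"), (3.75, "moderate congestion")],
    "main road":       [(1.25, "unblocked"), (1.67, "basically unblocked"),
                        (2.5, "light congestion"), (3.33, "moderate congestion")],
    "secondary trunk": [(1.33, "unblocked"), (2.0, "basically unblocked"),
                        (2.67, "light congestion"), (4.0, "moderate congestion")],
    "branch road":     [(1.16, "unblocked"), (1.75, "basically unblocked"),
                        (2.33, "light congestion"), (3.5, "moderate congestion")],
}

def traffic_grade(preset_speed_kmh, current_avg_speed_kmh, road_class):
    """R_T = preset speed / current road-section average speed, then look up Table 1."""
    r_t = preset_speed_kmh / current_avg_speed_kmh
    for upper, grade in TABLE_1[road_class]:
        if r_t <= upper:
            return grade
    return "severe congestion"

# Example: preset 60 km/h, current lane average 22 km/h on a main road -> R_T ~= 2.73
print(traffic_grade(60, 22, "main road"))  # -> "moderate congestion"
```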
Optionally, after executing S104, the embodiment of the present invention may further execute the following steps:
displaying the traffic road condition of each lane in the appointed road section on a high-precision map according to the traffic road condition grade of each lane in the appointed road section;
alternatively,
and sending the traffic road condition grade of each lane in the appointed road section to the platform server, so that the platform server displays the traffic road condition of each lane in the appointed road section on a high-precision map according to the traffic road condition grade of each lane in the appointed road section.
After the traffic road condition grade is obtained through the above steps, the traffic road condition of each lane in the specified road section can be displayed on a high-precision map (such as a GIS (Geographic Information System) map) according to the traffic road condition grade of each lane in the specified road section, giving real-time and accurate road traffic condition information based on video analysis. The traffic road condition grade can be superimposed on the high-precision map either directly by the camera, or by the camera sending the traffic road condition grade to the platform server and the platform server superimposing it on the high-precision map.
By applying this embodiment, the camera obtains multiple frames of video acquired in real time and the relative positions, in each frame of video, of a plurality of vehicle targets travelling on the specified road section; for each vehicle target, it converts the relative position of the vehicle target in each frame of video into the physical position of the vehicle target under the world coordinate system at the moment that frame was acquired; it determines the current traffic flow information of each lane in the specified road section according to the physical positions of the vehicle targets at different moments; and it analyzes the current traffic flow information of each lane to obtain the traffic road condition grade of each lane in the specified road section. Because video of the specified road section is acquired by the camera in real time, the physical positions of all vehicle targets in the specified road section under the world coordinate system can be obtained accurately; because the physical positions of the vehicles are associated with time, the traffic flow information converted from them can accurately represent the road condition of each lane in the specified road section; and because the physical positions of the vehicle targets are converted from video acquired in real time, the traffic road condition analyzed from them is real-time, so real-time analysis of the traffic road condition is realized.
The traffic road condition analysis method provided by the embodiment of the invention can be implemented in the camera, and can also be implemented by performing physical position conversion and current traffic information determination on the camera, sending the determined current traffic information to the platform server, and performing road condition analysis on the platform server.
Correspondingly, the traffic road condition analysis method provided by the embodiment of the invention can be applied to a camera, and as shown in fig. 2, the traffic road condition analysis method can include the following steps:
s201, acquiring a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video.
And S202, for each vehicle target, converting the physical position of the vehicle target in the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video.
And S203, determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments.
And S204, sending the current traffic information of each lane in the specified road section to the platform server, so that the platform server analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section.
The traffic road condition analysis method provided by the embodiment of the invention can be applied to a platform server, and as shown in fig. 3, the method can comprise the following steps:
s301, receiving current traffic information of each lane in the specified road section, wherein the current traffic information is obtained by the camera according to the physical positions of each vehicle target in the multi-frame video collected in real time at different moments.
S302, analyzing and obtaining the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
In this scheme, the camera acquires multiple frames of video collected in real time and the relative positions, in each frame of video, of a plurality of vehicle targets travelling on a specified road section; for each vehicle target, the camera converts the relative position of the vehicle target in each frame of video into the physical position of the vehicle target under the world coordinate system at the moment that frame was collected; the camera determines the current traffic flow information of each lane in the specified road section according to the physical positions of the vehicle targets at different moments and sends it to the platform server; and the platform server analyzes the current traffic flow information of each lane to obtain the traffic road condition grade of each lane in the specified road section. Because video of the specified road section is acquired by the camera in real time, the physical positions of all vehicle targets in the specified road section under the world coordinate system can be obtained accurately; because the physical positions of the vehicles are associated with time, the traffic flow information converted from them can accurately represent the road condition of each lane in the specified road section; and because the physical positions of the vehicle targets are converted from video acquired in real time, the traffic road condition analyzed from them is real-time, so real-time analysis of the traffic road condition is realized.
For easier understanding, the traffic condition analysis method provided by the embodiment of the present invention is described below from the perspective of interaction between the camera and the platform server. The interaction flow chart of the camera and the platform server is shown in fig. 4, and comprises the following steps.
S401, the camera acquires a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video.
The camera can collect multi-frame videos in real time, can identify vehicle targets in the collection process, and can correspondingly identify the relative positions of a plurality of vehicle targets running in the specified road section. Specifically, see S101 in the embodiment shown in fig. 1, which is not described herein again.
S402, aiming at each vehicle target, the camera converts the physical position of the vehicle target in the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video.
The relative position is the position of the vehicle target in the coordinate system of each frame of video collected by the camera. Since the coordinate system of the video collected by the camera has a certain mapping relation with the world coordinate system, the physical position of the vehicle target in the world coordinate system can be calculated through coordinate conversion. Because each frame of video is collected at a different time, the converted physical position also carries time information, namely it is space-time data, and the physical position may specifically be a GPS position. The specific manner of converting the physical position is shown in the embodiment shown in fig. 1, and will not be described herein again.
And S403, determining the current traffic information of each lane in the specified road section by the camera according to the physical positions of each vehicle target at different moments.
For each vehicle target, the physical positions at different times are integrated to obtain the driving conditions of the vehicle target, such as the driving speed, the driving direction, the time period during which it appears on the specified road section, the lane on which it drives, and the like. The driving conditions of all vehicles on the specified road section are then integrated to obtain the current traffic information of each lane in the specified road section, such as the average vehicle speed, the vehicle queuing length, and the like. The specific way of calculating the current traffic information is detailed in the embodiment shown in fig. 1, and is not described here again.
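Purely as an illustrative sketch (not part of the embodiment), the driving conditions of one vehicle target could be derived from its timed world positions as follows; the `lane_of` mapping from a position to a lane is assumed to be available, for example from a high-precision map.

```python
from math import atan2, degrees, hypot

def driving_conditions(track, lane_of):
    """Derive the driving conditions of one vehicle target.
    `track` is a time-ordered list of (t_seconds, x_metres, y_metres) tuples;
    `lane_of(x, y)` returns the lane a position falls on (assumed given)."""
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    distance = hypot(x1 - x0, y1 - y0)
    speed = distance / (t1 - t0) if t1 > t0 else 0.0   # driving speed, m/s
    heading = degrees(atan2(y1 - y0, x1 - x0))          # driving direction
    lanes = {lane_of(x, y) for _, x, y in track}        # lane(s) driven on
    return {"speed_mps": speed, "heading_deg": heading,
            "lanes": lanes, "present_from_to": (t0, t1)}
```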
S404, the camera sends current traffic information of each lane in the specified road section to the platform server.
After the camera determines the current traffic information of each lane in the specified road section, the current traffic information can be sent to the platform server, and the platform server analyzes the traffic road condition.
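The embodiment does not prescribe a message format for this step; purely as an example, the per-lane current traffic information pushed to the platform server could be serialised as below, where all field names, units and values are assumptions introduced for illustration.

```python
import json

# Illustrative payload only; field names, units and values are assumptions.
payload = {
    "road_section_id": "section-001",
    "timestamp": "2018-12-04T08:30:00Z",
    "lanes": [
        {"lane_id": 1, "avg_speed_kmh": 42.5, "queue_length_m": 18.0},
        {"lane_id": 2, "avg_speed_kmh": 12.3, "queue_length_m": 95.5},
    ],
}
message = json.dumps(payload)  # handed to whatever transport links camera and server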
S405, the platform server analyzes and obtains the traffic road condition grade of each lane in the appointed road section according to the current traffic flow information of each lane in the appointed road section.
The difference from the embodiment shown in fig. 1 is that, in this embodiment, the platform server analyzes the traffic road condition level. Specifically, the traffic road condition level is obtained according to the current traffic information of each lane in the specified road section sent by the camera, and the analysis mode is basically the same as that of the camera in the embodiment shown in fig. 1, so it is not described here again.
Optionally, after executing S405, the method provided in the embodiment of the present invention may further implement the following steps:
and displaying the traffic road conditions of all lanes in the appointed road section on the high-precision map according to the traffic road condition grades of all lanes in the appointed road section.
After the platform server analyzes and obtains the traffic road condition grades of all lanes in the appointed road section, the traffic road condition grades can be directly superposed on a high-precision map for displaying, and real-time and accurate road traffic road condition information based on video analysis is obtained.
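How the grades are rendered is left to the platform implementation; one possible sketch, with grade names and colours that are assumptions of the example, is to emit a colour-coded GeoJSON-style feature per lane for overlay on the high-precision map.

```python
# Illustrative only: colour-code each lane polyline by its traffic road
# condition grade before overlaying it on the high-precision map.
GRADE_COLOURS = {"unblocked": "#2ecc40", "slow": "#ffdc00", "congested": "#ff4136"}

def lane_feature(lane_id, coordinates, grade):
    """Build a GeoJSON-like feature for one lane of the specified road section."""
    return {
        "type": "Feature",
        "geometry": {"type": "LineString", "coordinates": coordinates},
        "properties": {"lane_id": lane_id, "grade": grade,
                       "color": GRADE_COLOURS.get(grade, "#aaaaaa")},
    }
```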
By applying the embodiment, the camera acquires multi-frame videos acquired in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame video, converts physical positions of the vehicle targets under a world coordinate system at the moment of acquiring each frame video according to the relative positions of the vehicle targets in each frame video aiming at each vehicle target, determines current traffic flow information of each lane in the specified road section according to the physical positions of each vehicle target at different moments, sends the current traffic flow information of each lane in the specified road section to the platform server, and the platform server analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section. The video acquisition is carried out on the appointed road section by utilizing the camera, the physical positions of all vehicle targets in the appointed road section under a world coordinate system can be accurately acquired by acquiring videos in real time, the converted traffic flow information can accurately represent the road condition of all lanes in the appointed road section due to the fact that the physical positions of the vehicles are related to time, the physical positions of the vehicle targets are converted by utilizing the videos acquired in real time, therefore, the real-time traffic road condition is analyzed, and the real-time analysis of the traffic road condition is realized.
The traffic road condition analysis method provided by the embodiment of the invention can be implemented on the platform server, the camera is only used for collecting the video, the collected video is sent to the platform server, and the platform server carries out relative position identification, physical position conversion and road condition grade analysis.
The embodiment of the invention also provides a traffic road condition analysis method, as shown in fig. 5, the method is applied to a platform server, and comprises the following steps:
S501, acquiring multi-frame videos acquired by the camera in real time.
The camera can collect video frames in real time and send the collected video frames to the platform server.
S502, identifying the relative positions of a plurality of vehicle targets which travel on the specified road section in each frame of video.
The platform server has a target identification function and can identify the relative positions of the vehicle targets in the video.
S503, aiming at each vehicle target, according to the relative position of the vehicle target in each frame of video, acquiring the physical position of the vehicle target in the world coordinate system at the moment of acquiring each frame of video.
After the platform server identifies the relative position of the vehicle target, the physical position of the vehicle target can be converted by the platform server itself based on a stored conversion relation; alternatively, the relative position of the vehicle target can be sent to the camera, the physical position of the vehicle target is converted by the camera and then fed back to the platform server. The specific conversion process is shown in the embodiment shown in fig. 3, and is not described herein again.
S504, determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments.
The platform server can send an acquisition instruction to the corresponding camera according to the physical position of the vehicle target, so that the camera sends current traffic information of each lane in the specified road section to the platform server. Alternatively, the platform server may be provided with a function of analyzing current traffic information.
Optionally, the current traffic information may include an average speed of the current road segment.
Then, S504 may specifically be:
aiming at each vehicle target, calculating the average speed of the vehicle target on a specified road section according to the time of at least two frames of videos and the physical position of the vehicle target under a world coordinate system corresponding to the time of at least two frames of videos;
and calculating the current road section average speed of each lane in the specified road section according to the average speed of each vehicle target in the specified road section.
Based on the above steps, the physical position of each vehicle target can be obtained in real time; assuming that the camera collects 25 frames of video per second, the minimum time granularity is 1 s/25 = 0.04 s, i.e. accurate to the millisecond level. The physical positions S1 and S2 of a vehicle target at the times corresponding to any two frames of video can be obtained through coordinate conversion; if the times corresponding to the two frames of video are T1 and T2 respectively, the average vehicle speed of the vehicle target calculated from the two frames of video is V = |S1 - S2| / |T1 - T2|. The average vehicle speed calculated using more than two frames of video can be more accurate, namely, an average vehicle speed is calculated between every two adjacent frames, and these speeds are then averaged as a whole. Thus, the average speed of each vehicle target travelling on the specified road section can be obtained, and the current road section average speed of each lane in the specified road section can be calculated by averaging the average speeds of all the vehicle targets on each lane.
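The calculation just described can be sketched in Python as follows; positions are simplified here to a scalar distance along the road section, which is an assumption of the sketch rather than of the embodiment.

```python
def vehicle_average_speed(samples):
    """Average speed of one vehicle target from two or more timed positions.
    `samples` is a time-ordered list of (t_seconds, s_metres) pairs, where s is
    the position along the road section (a simplifying assumption). A speed is
    computed between each pair of adjacent frames and the results averaged."""
    speeds = [abs(s2 - s1) / abs(t2 - t1)
              for (t1, s1), (t2, s2) in zip(samples, samples[1:]) if t2 != t1]
    return sum(speeds) / len(speeds) if speeds else 0.0

def lane_average_speed(vehicle_samples_by_lane):
    """Current road section average speed per lane: the mean of the average
    speeds of all vehicle targets observed on that lane."""
    return {lane: sum(vehicle_average_speed(s) for s in tracks) / len(tracks)
            for lane, tracks in vehicle_samples_by_lane.items() if tracks}
```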
And S505, analyzing and obtaining the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
The current traffic flow information of each lane represents the road condition of the lane, so the traffic road condition grade of each lane in the appointed road section can be obtained through analysis.
Optionally, S505 may specifically be:
acquiring preset traffic information of a specified road section;
determining a road traffic jam index of each lane in the specified road section according to preset traffic flow information and current traffic flow information of each lane in the specified road section;
and determining the traffic road condition grade of each lane in the specified road section by utilizing the corresponding relation between the pre-stored road traffic congestion index and the traffic road condition grade according to the road traffic congestion index of each lane in the specified road section.
Optionally, the current traffic information may include an average speed of the current road section; the preset traffic flow information may include a preset vehicle speed;
the step of determining the road traffic congestion index of each lane in the specified road section according to the preset traffic information and the current traffic information of each lane in the specified road section may specifically be:
and determining the road traffic jam index of each lane in the specified road section according to the preset vehicle speed and the current road section average vehicle speed of each lane in the specified road section.
This embodiment is the same as the corresponding embodiment in fig. 1 and will not be described again here.
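The embodiment leaves the exact congestion index formula and the index-to-grade correspondence to the pre-stored configuration; the sketch below assumes, for illustration only, that the index is the ratio of the preset (free-flow) vehicle speed to the current road section average speed, and the grade table is invented for the example.

```python
# Sketch under assumptions: the congestion index is taken as the ratio of the
# preset vehicle speed to the current road section average speed, and the
# index-to-grade correspondence below is illustrative only.
GRADE_TABLE = [(1.5, "unblocked"), (2.5, "slow"), (float("inf"), "congested")]

def congestion_index(preset_speed_kmh, current_avg_speed_kmh):
    if current_avg_speed_kmh <= 0:
        return float("inf")          # stationary traffic -> worst index
    return preset_speed_kmh / current_avg_speed_kmh

def traffic_grade(index, grade_table=GRADE_TABLE):
    for upper_bound, grade in grade_table:
        if index <= upper_bound:
            return grade

def grade_each_lane(preset_speed_kmh, lane_avg_speeds):
    return {lane: traffic_grade(congestion_index(preset_speed_kmh, v))
            for lane, v in lane_avg_speeds.items()}
```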
Optionally, after executing S505, the method provided in the embodiment of the present invention may further execute the following steps:
and displaying the traffic road conditions of all lanes in the appointed road section on the high-precision map according to the traffic road condition grades of all lanes in the appointed road section.
After the traffic road condition grade is obtained through the steps, the traffic road condition grade can be displayed on a high-precision map, and real-time and accurate road traffic road condition information based on video analysis is obtained.
By applying the embodiment, the camera acquires multi-frame videos acquired in real time, the multi-frame videos are sent to the platform server, the platform server can identify the relative positions of a plurality of vehicle targets running on the specified road section in each frame of video, for each vehicle target, the physical positions of the vehicle targets under the world coordinate system at the moment of acquiring each frame of video are converted according to the relative positions of the vehicle targets in each frame of video, the current traffic flow information of each lane in the specified road section is determined according to the physical positions of the vehicle targets at different moments, and the traffic road condition grade of each lane in the specified road section is obtained through analysis according to the current traffic flow information of each lane in the specified road section. The video acquisition is carried out on the appointed road section by utilizing the camera, the physical positions of all vehicle targets in the appointed road section under a world coordinate system can be accurately acquired by acquiring videos in real time, the converted traffic flow information can accurately represent the road condition of all lanes in the appointed road section due to the fact that the physical positions of the vehicles are related to time, the physical positions of the vehicle targets are converted by utilizing the videos acquired in real time, therefore, the real-time traffic road condition is analyzed, and the real-time analysis of the traffic road condition is realized.
Corresponding to the above method embodiment, an embodiment of the present invention provides a traffic condition analysis device, as shown in fig. 6, applied to a camera, where the device may include:
the acquiring module 610 is used for acquiring multiple frames of videos acquired in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video;
the conversion module 620 is used for converting the physical position of each vehicle target in the world coordinate system at the moment of acquiring each frame of video according to the relative position of each vehicle target in each frame of video;
a determining module 630, configured to determine current traffic information of each lane in the specified road segment according to physical positions of each vehicle target at different times;
and the analysis module 640 is configured to analyze the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section.
Optionally, the conversion module 620 may be specifically configured to:
for each vehicle target in each frame video, determining the relative positions of at least three reference targets which are closest to the vehicle target in the frame video according to the relative position of the vehicle target in the frame video;
searching the physical positions of the at least three reference targets in a world coordinate system according to the relative positions of the at least three reference targets and the pre-stored corresponding relationship between the relative positions and the physical positions of the reference targets;
establishing a transformation matrix of a video coordinate system and the world coordinate system according to the relative positions and the physical positions of the at least three reference targets;
and converting the physical position of the vehicle target in the world coordinate system according to the relative position of the vehicle target in the frame of video and the transformation matrix.
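The embodiment only states that a transformation matrix is built from at least three reference targets; assuming an affine model (which three point correspondences determine exactly, provided the points are not collinear), a sketch of the estimation with NumPy could look like this:

```python
import numpy as np

def affine_from_references(pixel_pts, world_pts):
    """Estimate a 2x3 affine matrix mapping pixel (u, v) to world (x, y) from
    the three reference targets nearest to the vehicle target.
    pixel_pts, world_pts: arrays of shape (3, 2); points must not be collinear."""
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    world_pts = np.asarray(world_pts, dtype=float)
    P = np.hstack([pixel_pts, np.ones((3, 1))])   # rows of [u, v, 1]
    M = np.linalg.solve(P, world_pts)             # solve P @ M = world_pts
    return M.T                                    # 2x3 transformation matrix

def pixel_to_world(affine, u, v):
    """Apply the transformation matrix to a vehicle target's relative position."""
    return affine @ np.array([u, v, 1.0])
```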
Optionally, the conversion module 620 may be specifically configured to:
aiming at each vehicle target in each frame of video, determining a target calibration area to which the vehicle target belongs according to the relative position of the vehicle target in the frame of video and each calibration area divided in advance based on the relative position of each reference target;
and converting the physical position of the vehicle target in a world coordinate system according to the relative position of the vehicle target in the frame of video and the homography matrix corresponding to the target calibration area acquired in advance.
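A minimal sketch of the calibration area variant, assuming the homography of each area has been obtained in advance (for example from reference targets within that area) and that a membership test for each pre-divided area is available:

```python
import numpy as np

def apply_homography(H, u, v):
    """Map a relative position (u, v) to a world position using the 3x3
    homography H of the calibration area the vehicle target falls in."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

def convert_with_regions(regions, u, v):
    """`regions` is a list of (contains, H) pairs, where `contains(u, v)` tests
    membership of a pre-divided calibration area (assumed given) and H is the
    homography acquired in advance for that area."""
    for contains, H in regions:
        if contains(u, v):
            return apply_homography(H, u, v)
    raise ValueError("relative position outside all calibration areas")
```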
Optionally, the conversion module 620 may be further configured to:
acquiring the equipment parameters of the camera;
determining a high-precision map matched with the camera according to the equipment parameters;
identifying a target located at a specified relative position in the video, and acquiring a physical position of the target in the high-precision map;
and determining the relative position of each reference target in the video according to the position relation between the target and the peripheral reference target.
Optionally, the conversion module 620 may be specifically configured to:
acquiring equipment parameters of the camera, wherein the equipment parameters comprise a field angle, an erection height value and longitude and latitude;
for each vehicle target in each frame of video, determining the PT coordinates of the camera when the camera directly faces the vehicle target according to the relative position of the vehicle target in the frame of video and the field angle, and taking the PT coordinates as a first P coordinate and a first T coordinate;
acquiring a P coordinate of the camera when the camera points to a specified direction, and taking the P coordinate as a second P coordinate;
calculating the difference between the first P coordinate and the second P coordinate to be used as the horizontal included angle between the vehicle target and the designated direction;
calculating the product of the tangent value of the first T coordinate and the erection height value as the horizontal distance between the vehicle target and the camera;
according to the horizontal included angle and the horizontal distance, calculating the longitude and latitude distance between the vehicle target and the camera through a trigonometric function;
and calculating the physical position of the vehicle target under a world coordinate system according to the longitude and latitude of the camera and the longitude and latitude distance.
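The geometric steps above can be illustrated with the simplified flat-ground sketch below; the metres-per-degree constant and the choice of due north as the specified direction are assumptions of the sketch, not of the embodiment.

```python
from math import radians, tan, sin, cos

METRES_PER_DEG_LAT = 111_320.0   # rough approximation, assumed for the sketch

def vehicle_world_position(cam_lat, cam_lon, mount_height_m,
                           first_p_deg, first_t_deg, north_p_deg):
    """Simplified flat-ground sketch of the steps above.
    first_p_deg / first_t_deg: P and T coordinates when the camera directly
    faces the vehicle target; north_p_deg: the P coordinate when the camera
    points in the specified direction (assumed here to be due north)."""
    # Horizontal angle between the vehicle target and the specified direction
    bearing = radians(first_p_deg - north_p_deg)
    # Horizontal distance from the tangent of the first T coordinate
    distance = mount_height_m * tan(radians(first_t_deg))
    # North/east offsets via trigonometric functions, converted to degrees
    d_north = distance * cos(bearing)
    d_east = distance * sin(bearing)
    lat = cam_lat + d_north / METRES_PER_DEG_LAT
    lon = cam_lon + d_east / (METRES_PER_DEG_LAT * cos(radians(cam_lat)))
    return lat, lon
```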
Optionally, the analysis module 640 may be specifically configured to:
acquiring preset traffic information of the specified road section;
determining a road traffic jam index of each lane in the specified road section according to the preset traffic flow information and the current traffic flow information of each lane in the specified road section;
and determining the traffic road condition grade of each lane in the specified road section by utilizing the corresponding relation between the pre-stored road traffic congestion index and the traffic road condition grade according to the road traffic congestion index of each lane in the specified road section.
Optionally, the current traffic information may include a current road segment average vehicle speed; the preset traffic information may include a preset vehicle speed;
the determining module 630 may be specifically configured to:
aiming at each vehicle target, calculating the average speed of the vehicle target on the specified road section according to the time of at least two frames of videos and the physical position of the vehicle target under the world coordinate system corresponding to the time of the at least two frames of videos;
calculating the current road section average speed of each lane in the specified road section according to the average speed of each vehicle target in the specified road section;
the analysis module 640 may be specifically configured to:
and determining the road traffic jam index of each lane in the specified road section according to the preset vehicle speed and the current road section average vehicle speed of each lane in the specified road section.
Optionally, the apparatus may further include:
the display module is used for displaying the traffic road condition of each lane in the specified road section on a high-precision map according to the traffic road condition grade of each lane in the specified road section; or the traffic road condition grade of each lane in the specified road section is sent to a platform server, so that the platform server displays the traffic road condition of each lane in the specified road section on a high-precision map according to the traffic road condition grade of each lane in the specified road section.
By applying the embodiment, the camera obtains the multi-frame video acquired in real time and the relative positions of the plurality of vehicle targets running on the specified road section in each frame of video, converts the physical positions of the vehicle targets under the world coordinate system at the moment of acquiring each frame of video according to the relative positions of the vehicle targets in each frame of video aiming at each vehicle target, determines the current traffic flow information of each lane in the specified road section according to the physical positions of each vehicle target at different moments, and analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section. The video acquisition is carried out on the appointed road section by utilizing the camera, the physical positions of all vehicle targets in the appointed road section under a world coordinate system can be accurately acquired by acquiring videos in real time, the converted traffic flow information can accurately represent the road condition of all lanes in the appointed road section due to the fact that the physical positions of the vehicles are related to time, the physical positions of the vehicle targets are converted by utilizing the videos acquired in real time, therefore, the real-time traffic road condition is analyzed, and the real-time analysis of the traffic road condition is realized.
An embodiment of the present invention provides a traffic condition analysis device, as shown in fig. 7, applied to a camera, where the device may include:
the acquisition module 710 is configured to acquire multiple frames of videos acquired in real time and relative positions of multiple vehicle targets running on a specified road section in each frame of video;
the conversion module 720 is used for converting the physical position of each vehicle target in the world coordinate system at the moment of acquiring each frame of video according to the relative position of each vehicle target in each frame of video;
the determining module 730 is configured to determine current traffic information of each lane in the specified road segment according to physical positions of each vehicle target at different times;
the sending module 740 is configured to send the current traffic information of each lane in the specified road segment to a platform server, so that the platform server obtains the traffic road condition level of each lane in the specified road segment through analysis according to the current traffic information of each lane in the specified road segment.
An embodiment of the present invention further provides a traffic road condition analysis device, as shown in fig. 8, which is applied to a platform server, and the device may include:
the receiving module 810 is configured to receive current traffic information of each lane in the specified road section, where the current traffic information is determined by the camera according to physical positions of each vehicle target at different times in the multi-frame video acquired in real time;
and the analysis module 820 is configured to analyze the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section.
Optionally, the apparatus may further include:
and the display module is used for displaying the traffic road conditions of all the lanes in the appointed road section on a high-precision map according to the traffic road condition grades of all the lanes in the appointed road section.
By applying the embodiment, the camera acquires multi-frame videos acquired in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame video, converts physical positions of the vehicle targets under a world coordinate system at the moment of acquiring each frame video according to the relative positions of the vehicle targets in each frame video aiming at each vehicle target, determines current traffic flow information of each lane in the specified road section according to the physical positions of each vehicle target at different moments, sends the current traffic flow information of each lane in the specified road section to the platform server, and the platform server analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section. The video acquisition is carried out on the appointed road section by utilizing the camera, the physical positions of all vehicle targets in the appointed road section under a world coordinate system can be accurately acquired by acquiring videos in real time, the converted traffic flow information can accurately represent the road condition of all lanes in the appointed road section due to the fact that the physical positions of the vehicles are related to time, the physical positions of the vehicle targets are converted by utilizing the videos acquired in real time, therefore, the real-time traffic road condition is analyzed, and the real-time analysis of the traffic road condition is realized.
An embodiment of the present invention further provides a traffic road condition analysis device, as shown in fig. 9, the device may include:
an obtaining module 910, configured to obtain a multi-frame video collected by a camera in real time;
the identifying module 920 is used for identifying the relative positions of a plurality of vehicle targets which travel on the specified road section in each frame of video;
the obtaining module 910 is further configured to, for each vehicle target, obtain, according to a relative position of the vehicle target in each frame of video, a physical position of the vehicle target in a world coordinate system at a time when each frame of video is collected;
a determining module 930, configured to determine current traffic information of each lane in the specified road segment according to physical positions of each vehicle target at different times;
and the analysis module 940 is configured to analyze the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section.
Optionally, the current traffic information may include a current road segment average vehicle speed;
the determining module 930 may be specifically configured to:
aiming at each vehicle target, calculating the average speed of the vehicle target on the specified road section according to the time of at least two frames of videos and the physical position of the vehicle target under the world coordinate system corresponding to the time of the at least two frames of videos;
and calculating the current road section average speed of each lane in the specified road section according to the average speed of each vehicle target in the specified road section.
Optionally, the analysis module 940 may be specifically configured to:
acquiring preset traffic information of the specified road section;
determining a road traffic jam index of each lane in the specified road section according to the preset traffic flow information and the current traffic flow information of each lane in the specified road section;
and determining the traffic road condition grade of each lane in the specified road section by utilizing the corresponding relation between the pre-stored road traffic congestion index and the traffic road condition grade according to the road traffic congestion index of each lane in the specified road section.
Optionally, the current traffic information may include a current road segment average vehicle speed; the preset traffic information may include a preset vehicle speed;
the analysis module 940 may be specifically configured to:
and determining the road traffic jam index of each lane in the specified road section according to the preset vehicle speed and the current road section average vehicle speed of each lane in the specified road section.
Optionally, the apparatus may further include:
and the display module is used for displaying the traffic road conditions of all the lanes in the appointed road section on a high-precision map according to the traffic road condition grades of all the lanes in the appointed road section.
By applying the embodiment, the camera acquires multi-frame videos acquired in real time, the multi-frame videos are sent to the platform server, the platform server can identify the relative positions of a plurality of vehicle targets running on the specified road section in each frame of video, for each vehicle target, the physical positions of the vehicle targets under the world coordinate system at the moment of acquiring each frame of video are converted according to the relative positions of the vehicle targets in each frame of video, the current traffic flow information of each lane in the specified road section is determined according to the physical positions of the vehicle targets at different moments, and the traffic road condition grade of each lane in the specified road section is obtained through analysis according to the current traffic flow information of each lane in the specified road section. The video acquisition is carried out on the appointed road section by utilizing the camera, the physical positions of all vehicle targets in the appointed road section under a world coordinate system can be accurately acquired by acquiring videos in real time, the converted traffic flow information can accurately represent the road condition of all lanes in the appointed road section due to the fact that the physical positions of the vehicles are related to time, the physical positions of the vehicle targets are converted by utilizing the videos acquired in real time, therefore, the real-time traffic road condition is analyzed, and the real-time analysis of the traffic road condition is realized.
An embodiment of the present invention provides a camera, as shown in fig. 10, including a processor 1001 and a memory 1002;
the memory 1002 is used for storing computer programs;
the processor 1001 is configured to execute the computer program stored in the memory 1002, and implement the following steps:
acquiring a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video;
aiming at each vehicle target, converting the physical position of the vehicle target under the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video;
determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments;
and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
Optionally, when the processor 1001 performs the step of calculating, according to the relative position of the vehicle target in each frame of video, the physical position of the vehicle target in the world coordinate system at the time of acquiring each frame of video, the following steps may be specifically implemented:
for each vehicle target in each frame video, determining the relative positions of at least three reference targets which are closest to the vehicle target in the frame video according to the relative position of the vehicle target in the frame video;
searching the physical positions of the at least three reference targets in a world coordinate system according to the relative positions of the at least three reference targets and the pre-stored corresponding relationship between the relative positions and the physical positions of the reference targets;
establishing a transformation matrix of a video coordinate system and the world coordinate system according to the relative positions and the physical positions of the at least three reference targets;
and converting the physical position of the vehicle target in the world coordinate system according to the relative position of the vehicle target in the frame of video and the transformation matrix.
Optionally, when the processor 1001 performs the step of calculating, according to the relative position of the vehicle target in each frame of video, the physical position of the vehicle target in the world coordinate system at the time of acquiring each frame of video, the following steps may be specifically implemented:
aiming at each vehicle target in each frame of video, determining a target calibration area to which the vehicle target belongs according to the relative position of the vehicle target in the frame of video and each calibration area divided in advance based on the relative position of each reference target;
and converting the physical position of the vehicle target in a world coordinate system according to the relative position of the vehicle target in the frame of video and the homography matrix corresponding to the target calibration area acquired in advance.
Optionally, the processor 1001 may be further configured to implement the following steps:
acquiring the equipment parameters of the camera;
determining a high-precision map matched with the camera according to the equipment parameters;
identifying a target located at a specified relative position in the video, and acquiring a physical position of the target in the high-precision map;
and determining the relative position of each reference target in the video according to the position relation between the target and the peripheral reference target.
Optionally, when the processor 1001 performs the step of calculating, according to the relative position of the vehicle target in each frame of video, the physical position of the vehicle target in the world coordinate system at the time of acquiring each frame of video, the following steps may be specifically implemented:
acquiring equipment parameters of the camera, wherein the equipment parameters comprise a field angle, an erection height value and longitude and latitude;
for each vehicle target in each frame of video, determining the PT coordinates of the camera when the camera directly faces the vehicle target according to the relative position of the vehicle target in the frame of video and the field angle, and taking the PT coordinates as a first P coordinate and a first T coordinate;
acquiring a P coordinate of the camera when the camera points to a specified direction, and taking the P coordinate as a second P coordinate;
calculating the difference between the first P coordinate and the second P coordinate to be used as the horizontal included angle between the vehicle target and the designated direction;
calculating the product of the tangent value of the first T coordinate and the erection height value as the horizontal distance between the vehicle target and the camera;
according to the horizontal included angle and the horizontal distance, calculating the longitude and latitude distance between the vehicle target and the camera through a trigonometric function;
and calculating the physical position of the vehicle target under a world coordinate system according to the longitude and latitude of the camera and the longitude and latitude distance.
Optionally, when the step of analyzing and obtaining the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section is implemented by the processor 1001, the following steps may be specifically implemented:
acquiring preset traffic information of the specified road section;
determining a road traffic jam index of each lane in the specified road section according to the preset traffic flow information and the current traffic flow information of each lane in the specified road section;
and determining the traffic road condition grade of each lane in the specified road section by utilizing the corresponding relation between the pre-stored road traffic congestion index and the traffic road condition grade according to the road traffic congestion index of each lane in the specified road section.
Optionally, the current traffic information may include a current road segment average vehicle speed; the preset traffic information may include a preset vehicle speed;
when the processor 1001 implements the step of determining the current traffic information of each lane in the specified road segment according to the physical positions of each vehicle target at different times, it may be specifically configured to implement the following steps:
aiming at each vehicle target, calculating the average speed of the vehicle target on the specified road section according to the time of at least two frames of videos and the physical position of the vehicle target under the world coordinate system corresponding to the time of the at least two frames of videos;
calculating the current road section average speed of each lane in the specified road section according to the average speed of each vehicle target in the specified road section;
when the step of determining the road traffic congestion index of each lane in the specified road section according to the preset traffic information and the current traffic information of each lane in the specified road section is implemented, the processor 1001 may be specifically configured to implement the following steps:
and determining the road traffic jam index of each lane in the specified road section according to the preset vehicle speed and the current road section average vehicle speed of each lane in the specified road section.
Optionally, the processor 1001 may be further configured to implement the following steps:
displaying the traffic road condition of each lane in the specified road section on a high-precision map according to the traffic road condition grade of each lane in the specified road section;
or,
and sending the traffic road condition grade of each lane in the appointed road section to a platform server so that the platform server displays the traffic road condition of each lane in the appointed road section on a high-precision map according to the traffic road condition grade of each lane in the appointed road section.
Data transmission between the memory 1002 and the processor 1001 can be performed through a wired connection or a wireless connection, and the camera can communicate with other devices through a wired communication interface or a wireless communication interface.
The memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), for example at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In this embodiment, the processor can realize that: the camera obtains a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video, converts physical positions of the vehicle targets under a world coordinate system at the moment of collecting each frame of video according to the relative positions of the vehicle targets in each frame of video aiming at each vehicle target, determines current traffic flow information of each lane in the specified road section according to the physical positions of each vehicle target at different moments, and analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section. The video acquisition is carried out on the appointed road section by utilizing the camera, the physical positions of all vehicle targets in the appointed road section under a world coordinate system can be accurately acquired by acquiring videos in real time, the converted traffic flow information can accurately represent the road condition of all lanes in the appointed road section due to the fact that the physical positions of the vehicles are related to time, the physical positions of the vehicle targets are converted by utilizing the videos acquired in real time, therefore, the real-time traffic road condition is analyzed, and the real-time analysis of the traffic road condition is realized.
In addition, an embodiment of the present invention provides a machine-readable storage medium for storing a computer program, where the computer program causes a processor to execute the steps of the traffic condition analysis method applied to a camera provided in the embodiment of the present invention.
In this embodiment, the machine-readable storage medium stores a computer program that executes the traffic condition analysis method applied to the camera according to the embodiment of the present invention when running, so that it is possible to implement: the camera obtains a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video, converts physical positions of the vehicle targets under a world coordinate system at the moment of collecting each frame of video according to the relative positions of the vehicle targets in each frame of video aiming at each vehicle target, determines current traffic flow information of each lane in the specified road section according to the physical positions of each vehicle target at different moments, and analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section. The video acquisition is carried out on the appointed road section by utilizing the camera, the physical positions of all vehicle targets in the appointed road section under a world coordinate system can be accurately acquired by acquiring videos in real time, the converted traffic flow information can accurately represent the road condition of all lanes in the appointed road section due to the fact that the physical positions of the vehicles are related to time, the physical positions of the vehicle targets are converted by utilizing the videos acquired in real time, therefore, the real-time traffic road condition is analyzed, and the real-time analysis of the traffic road condition is realized.
Corresponding to the above method embodiment, the embodiment of the present invention provides a video camera, as shown in fig. 11, including a processor 1101 and a memory 1102;
the memory 1102 is used for storing computer programs;
the processor 1101 is configured to execute the computer program stored in the memory 1102, and implement the following steps:
acquiring a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video;
aiming at each vehicle target, converting the physical position of the vehicle target under the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video;
determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments;
and sending the current traffic information of each lane in the specified road section to a platform server so that the platform server analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section.
An embodiment of the present invention further provides a platform server, as shown in fig. 12, including a processor 1201 and a memory 1202;
the memory 1202 for storing computer programs;
the processor 1201 is configured to execute the computer program stored in the memory 1202, and implement the following steps:
receiving current traffic information of each lane in an appointed road section, which is sent by a camera, wherein the current traffic information is determined by the camera according to the physical positions of each vehicle target at different moments in a multi-frame video acquired in real time;
and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
Optionally, the processor 1201 may be further configured to implement the following steps:
and displaying the traffic road conditions of all lanes in the appointed road section on a high-precision map according to the traffic road condition grades of all lanes in the appointed road section.
Data transmission between the memory 1102 and the processor 1101, and between the memory 1202 and the processor 1201 can be performed by wired connection or wireless connection, and the video camera can communicate with a device such as a platform server through a wired communication interface or a wireless communication interface.
The memory may include RAM, or may include NVM, such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor can be a general-purpose processor, including a CPU, an NP, and the like; it can also be a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In this embodiment, the processor can realize that: the method comprises the steps that a camera acquires a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video, for each vehicle target, the physical position of the vehicle target under a world coordinate system at the moment of collecting each frame of video is converted according to the relative position of the vehicle target in each frame of video, the current traffic information of each lane in the specified road section is determined according to the physical position of each vehicle target at different moments, the current traffic information of each lane in the specified road section is sent to a platform server, and the platform server analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section. The video acquisition is carried out on the appointed road section by utilizing the camera, the physical positions of all vehicle targets in the appointed road section under a world coordinate system can be accurately acquired by acquiring videos in real time, the converted traffic flow information can accurately represent the road condition of all lanes in the appointed road section due to the fact that the physical positions of the vehicles are related to time, the physical positions of the vehicle targets are converted by utilizing the videos acquired in real time, therefore, the real-time traffic road condition is analyzed, and the real-time analysis of the traffic road condition is realized.
In addition, an embodiment of the present invention provides a machine-readable storage medium for storing a computer program, where the computer program causes a processor to execute the steps of the traffic condition analysis method applied to a camera provided in the embodiment of the present invention.
The embodiment of the invention provides a machine-readable storage medium for storing a computer program, wherein the computer program causes a processor to execute the steps of the traffic road condition analysis method applied to a platform server provided by the embodiment of the invention.
In this embodiment, the machine-readable storage medium stores a computer program that executes the traffic condition analysis method applied to the camera and the platform server provided in the embodiment of the present invention when running, so that it is possible to implement: the method comprises the steps that a camera acquires a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video, for each vehicle target, the physical position of the vehicle target under a world coordinate system at the moment of collecting each frame of video is converted according to the relative position of the vehicle target in each frame of video, the current traffic information of each lane in the specified road section is determined according to the physical position of each vehicle target at different moments, the current traffic information of each lane in the specified road section is sent to a platform server, and the platform server analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section. The video acquisition is carried out on the appointed road section by utilizing the camera, the physical positions of all vehicle targets in the appointed road section under a world coordinate system can be accurately acquired by acquiring videos in real time, the converted traffic flow information can accurately represent the road condition of all lanes in the appointed road section due to the fact that the physical positions of the vehicles are related to time, the physical positions of the vehicle targets are converted by utilizing the videos acquired in real time, therefore, the real-time traffic road condition is analyzed, and the real-time analysis of the traffic road condition is realized.
An embodiment of the present invention provides a platform server, as shown in fig. 13, including a processor 1301 and a memory 1302;
the memory 1302 is used for storing computer programs;
the processor 1301 is configured to execute the computer program stored in the memory 1302, and implement the following steps:
acquiring a multi-frame video acquired by a camera in real time;
identifying relative positions of a plurality of vehicle targets which travel on a specified road section in each frame of video;
aiming at each vehicle target, acquiring the physical position of the vehicle target in a world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video;
determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments;
and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
Optionally, the current traffic information may include a current road segment average vehicle speed;
when the step of determining the current traffic information of each lane in the specified road segment according to the physical positions of each vehicle target at different times is implemented, the processor 1301 may specifically be configured to implement the following steps:
aiming at each vehicle target, calculating the average speed of the vehicle target on the specified road section according to the time of at least two frames of videos and the physical position of the vehicle target under the world coordinate system corresponding to the time of the at least two frames of videos;
and calculating the current road section average speed of each lane in the specified road section according to the average speed of each vehicle target in the specified road section.
Optionally, when the step of analyzing and obtaining the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section is implemented, the processor 1301 may be specifically configured to implement the following steps:
acquiring preset traffic information of the specified road section;
determining a road traffic jam index of each lane in the specified road section according to the preset traffic flow information and the current traffic flow information of each lane in the specified road section;
and determining the traffic road condition grade of each lane in the specified road section by utilizing the corresponding relation between the pre-stored road traffic congestion index and the traffic road condition grade according to the road traffic congestion index of each lane in the specified road section.
Optionally, the current traffic information may include a current road segment average vehicle speed; the preset traffic information may include a preset vehicle speed;
when the step of determining the road traffic congestion index of each lane in the specified road section according to the preset traffic information and the current traffic information of each lane in the specified road section is implemented, the processor 1301 may be specifically configured to implement the following steps:
and determining the road traffic jam index of each lane in the specified road section according to the preset vehicle speed and the current road section average vehicle speed of each lane in the specified road section.
Optionally, the processor 1301 may further be configured to implement the following steps:
and displaying the traffic road conditions of all lanes in the appointed road section on a high-precision map according to the traffic road condition grades of all lanes in the appointed road section.
Data transmission between the memory 1302 and the processor 1301 can be performed through a wired connection or a wireless connection, and the platform server can communicate with a device such as a camera through a wired communication interface or a wireless communication interface.
The memory may include RAM, or may include NVM, such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor can be a general-purpose processor, including a CPU, an NP, and the like; it can also be a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In this embodiment, the processor can realize that: the method comprises the steps that a camera acquires multi-frame videos collected in real time, the multi-frame videos are sent to a platform server, the platform server can identify the relative positions of a plurality of vehicle targets running on a specified road section in each frame of video, for each vehicle target, the physical positions of the vehicle targets in a world coordinate system at the moment of collecting each frame of video are converted according to the relative positions of the vehicle targets in each frame of video, the current traffic flow information of each lane in the specified road section is determined according to the physical positions of the vehicle targets at different moments, and the traffic road condition grade of each lane in the specified road section is obtained through analysis according to the current traffic flow information of each lane in the specified road section. The video acquisition is carried out on the appointed road section by utilizing the camera, the physical positions of all vehicle targets in the appointed road section under a world coordinate system can be accurately acquired by acquiring videos in real time, the converted traffic flow information can accurately represent the road condition of all lanes in the appointed road section due to the fact that the physical positions of the vehicles are related to time, the physical positions of the vehicle targets are converted by utilizing the videos acquired in real time, therefore, the real-time traffic road condition is analyzed, and the real-time analysis of the traffic road condition is realized.
In addition, an embodiment of the present invention provides a machine-readable storage medium, which is used for storing a computer program, and the computer program causes a processor to execute the steps of the traffic condition analysis method provided by the embodiment of the present invention.
In this embodiment, the machine-readable storage medium stores a computer program which, when run, executes the traffic road condition analysis method provided in the embodiments of the present invention, so that the following can be implemented: the camera acquires multiple frames of video captured in real time and sends them to the platform server; the platform server identifies the relative positions, in each frame of video, of the vehicle targets travelling on the specified road section; for each vehicle target, it converts the relative position of the vehicle target in each frame of video into the physical position of the vehicle target in the world coordinate system at the moment that frame was captured; it determines the current traffic flow information of each lane in the specified road section from the physical positions of the vehicle targets at different moments; and it analyzes the current traffic flow information of each lane in the specified road section to obtain the traffic road condition grade of each lane. Because the camera captures video of the specified road section in real time, the physical position of each vehicle target in the specified road section under the world coordinate system can be obtained accurately, and because those physical positions are associated with time, the traffic flow information converted from them accurately reflects the road condition of each lane in the specified road section. Since the physical positions of the vehicle targets are converted from video acquired in real time, the traffic road conditions obtained from the analysis are real-time, and real-time analysis of traffic road conditions is thereby achieved.
The embodiment of the present invention further provides a traffic condition analysis system, as shown in fig. 14, including a plurality of cameras 1401 and a platform server 1402;
the camera 1401 is used for acquiring multiple frames of videos collected in real time and relative positions of multiple vehicle targets running on a specified road section in each frame of video; aiming at each vehicle target, converting the physical position of the vehicle target under the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video; determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments; sending current traffic information of each lane in the specified road section to a platform server;
the platform server 1402 is configured to receive current traffic information of each lane in the specified road section, which is sent by the camera; and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
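The embodiments do not define a message format for the per-lane traffic flow information sent from the camera to the platform server; the snippet below shows one hypothetical JSON payload, with all field names and units assumed for illustration.

```python
# Hypothetical per-lane payload from the camera to the platform server; the
# embodiments do not define a message format, so field names and units are assumed.
import json
import time

payload = {
    "camera_id": "cam-001",              # assumed identifier
    "road_section_id": "section-12",     # the specified road section
    "timestamp": int(time.time()),
    "lanes": [
        {"lane_id": 1, "avg_speed_kmh": 52.4, "vehicle_count": 17},
        {"lane_id": 2, "avg_speed_kmh": 23.1, "vehicle_count": 29},
    ],
}
print(json.dumps(payload))               # sent over the camera's network interface
```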
By applying this embodiment, the camera acquires multiple frames of video captured in real time and the relative positions, in each frame of video, of the vehicle targets travelling on the specified road section; for each vehicle target, it converts the relative position of the vehicle target in each frame of video into the physical position of the vehicle target in the world coordinate system at the moment that frame was captured; it determines the current traffic flow information of each lane in the specified road section from the physical positions of the vehicle targets at different moments and sends that information to the platform server; and the platform server analyzes the current traffic flow information of each lane in the specified road section to obtain the traffic road condition grade of each lane. Because the camera captures video of the specified road section in real time, the physical position of each vehicle target in the specified road section under the world coordinate system can be obtained accurately, and because those physical positions are associated with time, the traffic flow information converted from them accurately reflects the road condition of each lane in the specified road section. Since the physical positions of the vehicle targets are converted from video acquired in real time, the traffic road conditions obtained from the analysis are real-time, and real-time analysis of traffic road conditions is thereby achieved.
The embodiment of the present invention further provides a traffic accident information collecting system, as shown in fig. 15, including a plurality of cameras 1501 and a platform server 1502;
the camera 1501 is used for acquiring a video;
the platform server 1502 is configured to obtain a multi-frame video acquired by a camera in real time; identifying relative positions of a plurality of vehicle targets which travel on a specified road section in each frame of video; aiming at each vehicle target, acquiring the physical position of the vehicle target in a world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video; determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments; and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
By applying this embodiment, the camera acquires multiple frames of video captured in real time and sends them to the platform server; the platform server identifies the relative positions, in each frame of video, of the vehicle targets travelling on the specified road section; for each vehicle target, it converts the relative position of the vehicle target in each frame of video into the physical position of the vehicle target in the world coordinate system at the moment that frame was captured; it determines the current traffic flow information of each lane in the specified road section from the physical positions of the vehicle targets at different moments; and it analyzes the current traffic flow information of each lane in the specified road section to obtain the traffic road condition grade of each lane. Because the camera captures video of the specified road section in real time, the physical position of each vehicle target in the specified road section under the world coordinate system can be obtained accurately, and because those physical positions are associated with time, the traffic flow information converted from them accurately reflects the road condition of each lane in the specified road section. Since the physical positions of the vehicle targets are converted from video acquired in real time, the traffic road conditions obtained from the analysis are real-time, and real-time analysis of traffic road conditions is thereby achieved.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the embodiments of the apparatus, the camera, the platform server, the machine-readable storage medium and the traffic condition analysis system, since they are substantially similar to the embodiments of the method, the description is simple, and the relevant points can be referred to the partial description of the embodiments of the method.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (20)

1. A traffic road condition analysis method is applied to a camera, and the method comprises the following steps:
acquiring a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video;
aiming at each vehicle target, converting the physical position of the vehicle target under the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video;
determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments;
and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
2. The method according to claim 1, wherein, for each vehicle target, converting the physical position of the vehicle target in the world coordinate system at the time of capturing each frame of video according to the relative position of the vehicle target in each frame of video comprises:
for each vehicle target in each frame video, determining the relative positions of at least three reference targets which are closest to the vehicle target in the frame video according to the relative position of the vehicle target in the frame video;
searching the physical positions of the at least three reference targets in a world coordinate system according to the relative positions of the at least three reference targets and the pre-stored corresponding relationship between the relative positions and the physical positions of the reference targets;
establishing a transformation matrix of a video coordinate system and the world coordinate system according to the relative positions and the physical positions of the at least three reference targets;
and converting the physical position of the vehicle target in the world coordinate system according to the relative position of the vehicle target in the frame of video and the transformation matrix.
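A minimal sketch of the kind of transformation described in the preceding claim: with three non-collinear reference targets whose relative (image) and physical (world) positions are known, a transformation matrix can be estimated and then applied to a vehicle target's relative position. Modelling the transformation as a 2D affine mapping and solving it with a least-squares routine are illustrative assumptions; the claim only requires some transformation matrix between the video and world coordinate systems.

```python
# Illustrative sketch of the claim-2 transformation (assumed affine model).
import numpy as np

# Known correspondences for three reference targets:
# relative (pixel) positions in the frame -> physical positions in world coordinates.
rel = np.array([[120.0, 480.0], [640.0, 470.0], [400.0, 200.0]])
world = np.array([[30.0, 5.0], [30.0, 12.0], [90.0, 8.5]])

# Solve world = A @ [x, y, 1] in a least-squares sense (exact for three points).
design = np.hstack([rel, np.ones((3, 1))])            # 3x3
A = np.linalg.lstsq(design, world, rcond=None)[0].T   # 2x3 transformation matrix

def to_world(rel_pos):
    x, y = rel_pos
    return A @ np.array([x, y, 1.0])

print(to_world((500.0, 350.0)))   # physical position of a vehicle target (illustrative)
```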
3. The method according to claim 1, wherein, for each vehicle target, converting the physical position of the vehicle target in the world coordinate system at the time of capturing each frame of video according to the relative position of the vehicle target in each frame of video comprises:
aiming at each vehicle target in each frame of video, determining a target calibration area to which the vehicle target belongs according to the relative position of the vehicle target in the frame of video and each calibration area divided in advance based on the relative position of each reference target;
and converting the physical position of the vehicle target in a world coordinate system according to the relative position of the vehicle target in the frame of video and the homography matrix corresponding to the target calibration area acquired in advance.
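A brief sketch of applying a pre-acquired homography matrix per calibration area, as in the preceding claim. The rule for selecting the target calibration area and the matrix values are assumptions for illustration; the claim only requires that each calibration area divided in advance has a corresponding homography matrix.

```python
# Sketch of the per-calibration-area homography in claim 3 (values are illustrative).
import numpy as np

# One pre-acquired 3x3 homography per calibration area (assumed values).
HOMOGRAPHIES = {
    "near_area": np.array([[0.05, 0.00, 10.0],
                           [0.00, 0.08,  2.0],
                           [0.00, 0.001, 1.0]]),
    "far_area":  np.array([[0.02, 0.00, 60.0],
                           [0.00, 0.03, 15.0],
                           [0.00, 0.0005, 1.0]]),
}

def calibration_area_of(rel_pos):
    # Placeholder rule: the areas are divided in advance based on the reference
    # targets; here the image is simply split by vertical position.
    return "near_area" if rel_pos[1] > 300 else "far_area"

def to_world(rel_pos):
    H = HOMOGRAPHIES[calibration_area_of(rel_pos)]
    x, y, w = H @ np.array([rel_pos[0], rel_pos[1], 1.0])
    return x / w, y / w            # physical position in world coordinates

print(to_world((420.0, 360.0)))
```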
4. A method according to claim 2 or 3, characterized in that the method further comprises:
acquiring the equipment parameters of the camera;
determining a high-precision map matched with the camera according to the equipment parameters;
identifying a target located at a specified relative position in the video, and acquiring a physical position of the target in the high-precision map;
and determining the relative position of each reference target in the video according to the position relation between the target and the peripheral reference target.
5. The method according to claim 1, wherein, for each vehicle target, converting the physical position of the vehicle target in the world coordinate system at the time of capturing each frame of video according to the relative position of the vehicle target in each frame of video comprises:
acquiring equipment parameters of the camera, wherein the equipment parameters comprise a field angle, an erection height value and longitude and latitude;
for each vehicle target in each frame of video, determining the PT coordinates that the camera would have when directly facing the vehicle target, according to the relative position of the vehicle target in the frame of video and the field angle, and taking them as a first P coordinate and a first T coordinate;
acquiring a P coordinate of the camera when the camera points to a specified direction, and taking the P coordinate as a second P coordinate;
calculating the difference between the first P coordinate and the second P coordinate as the horizontal included angle between the vehicle target and the specified direction;
calculating the product of the tangent value of the first T coordinate and the erection height value as the horizontal distance between the vehicle target and the camera;
according to the horizontal included angle and the horizontal distance, calculating the longitude and latitude distance between the vehicle target and the camera through a trigonometric function;
and calculating the physical position of the vehicle target under a world coordinate system according to the longitude and latitude of the camera and the longitude and latitude distance.
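A numerical sketch of the geometry in the preceding claim: the pan difference gives the horizontal included angle relative to the specified direction (assumed here to be due north), the tangent of the tilt multiplied by the erection height gives the horizontal distance, and the two together give north/east offsets added to the camera's longitude and latitude. Converting metre offsets to degrees via a spherical-Earth approximation is an assumed detail not spelled out in the claim.

```python
# Illustrative sketch of claim 5 (angles in degrees, height in metres).
import math

EARTH_RADIUS_M = 6_371_000.0   # assumed spherical-Earth approximation

def vehicle_lat_lon(first_p, first_t, second_p, cam_height_m, cam_lat, cam_lon):
    # Horizontal included angle between the vehicle target and the specified direction.
    angle = math.radians(first_p - second_p)
    # Horizontal distance from the camera: tan(first T coordinate) * erection height.
    dist = math.tan(math.radians(first_t)) * cam_height_m
    # North/east offsets via trigonometric functions (specified direction assumed north).
    north, east = dist * math.cos(angle), dist * math.sin(angle)
    # Convert metre offsets into latitude/longitude offsets (assumed detail).
    dlat = math.degrees(north / EARTH_RADIUS_M)
    dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(cam_lat))))
    return cam_lat + dlat, cam_lon + dlon

# Camera at (30.25, 120.16), mounted 12 m high, north at pan 0 deg, vehicle seen at
# pan 35 deg and tilt 80 deg measured from straight down (illustrative numbers).
print(vehicle_lat_lon(35.0, 80.0, 0.0, 12.0, 30.25, 120.16))
```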
6. The method as claimed in claim 1, wherein the analyzing the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section comprises:
acquiring preset traffic information of the specified road section;
determining a road traffic jam index of each lane in the specified road section according to the preset traffic flow information and the current traffic flow information of each lane in the specified road section;
and determining the traffic road condition grade of each lane in the specified road section by utilizing the corresponding relation between the pre-stored road traffic congestion index and the traffic road condition grade according to the road traffic congestion index of each lane in the specified road section.
7. The method of claim 6, wherein the current traffic information includes a current road segment average vehicle speed; the preset traffic flow information comprises a preset vehicle speed;
the determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments comprises the following steps:
aiming at each vehicle target, calculating the average speed of the vehicle target on the specified road section according to the time of at least two frames of videos and the physical position of the vehicle target under the world coordinate system corresponding to the time of the at least two frames of videos;
calculating the current road section average speed of each lane in the specified road section according to the average speed of each vehicle target in the specified road section;
determining a road traffic congestion index of each lane in the specified road section according to the preset traffic information and the current traffic information of each lane in the specified road section, wherein the determining comprises the following steps:
and determining the road traffic jam index of each lane in the specified road section according to the preset vehicle speed and the current road section average vehicle speed of each lane in the specified road section.
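The average-speed computation in the preceding claim can be pictured as follows: each vehicle target contributes one average speed obtained from its world-coordinate positions at the times of two frames, and the lane value is the mean over the vehicle targets in that lane. The straight-line distance and the arithmetic mean are assumed simplifications.

```python
# Sketch of the per-lane average speed in claim 7 (distances in metres, times in seconds).
import math
from collections import defaultdict

def vehicle_avg_speed_kmh(t0, pos0, t1, pos1):
    # Average speed of one vehicle target from two timestamped world-coordinate positions.
    return math.dist(pos0, pos1) / max(t1 - t0, 1e-3) * 3.6

def lane_avg_speeds(observations):
    # observations: list of (lane_id, t0, pos0, t1, pos1), one entry per vehicle target
    per_lane = defaultdict(list)
    for lane_id, t0, pos0, t1, pos1 in observations:
        per_lane[lane_id].append(vehicle_avg_speed_kmh(t0, pos0, t1, pos1))
    return {lane: sum(v) / len(v) for lane, v in per_lane.items()}

obs = [
    (1, 0.0, (0.0, 0.0), 2.0, (0.0, 27.8)),   # ~50 km/h
    (1, 0.0, (3.5, 1.0), 2.0, (3.5, 25.0)),   # ~43 km/h
    (2, 0.0, (7.0, 0.0), 2.0, (7.0, 11.1)),   # ~20 km/h
]
print(lane_avg_speeds(obs))
```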
8. The method of claim 1, wherein after analyzing the traffic road condition grade of each lane in the specified road segment according to the current traffic information of each lane in the specified road segment, the method further comprises:
displaying the traffic road condition of each lane in the specified road section on a high-precision map according to the traffic road condition grade of each lane in the specified road section;
or,
and sending the traffic road condition grade of each lane in the specified road section to a platform server, so that the platform server displays the traffic road condition of each lane in the specified road section on a high-precision map according to the traffic road condition grade of each lane in the specified road section.
9. A traffic road condition analysis method is applied to a camera, and the method comprises the following steps:
acquiring a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video;
aiming at each vehicle target, converting the physical position of the vehicle target under the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video;
determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments;
and sending the current traffic information of each lane in the specified road section to a platform server so that the platform server analyzes and obtains the traffic road condition grade of each lane in the specified road section according to the current traffic information of each lane in the specified road section.
10. A traffic road condition analysis method is applied to a platform server, and the method comprises the following steps:
receiving current traffic information of each lane in a specified road section, which is sent by a camera, wherein the current traffic information is determined by the camera according to the physical positions of each vehicle target at different moments in a multi-frame video acquired in real time;
and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
11. The method of claim 10, wherein after analyzing the traffic road condition grade of each lane in the specified road segment according to the current traffic information of each lane in the specified road segment, the method further comprises:
and displaying the traffic road condition of each lane in the specified road section on a high-precision map according to the traffic road condition grade of each lane in the specified road section.
12. A traffic condition analysis method is characterized by comprising the following steps:
acquiring a multi-frame video acquired by a camera in real time;
identifying relative positions of a plurality of vehicle targets which travel on a specified road section in each frame of video;
aiming at each vehicle target, acquiring the physical position of the vehicle target in a world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video;
determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments;
and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
13. The method of claim 12, wherein the current traffic information includes a current road segment average vehicle speed;
the determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments comprises the following steps:
aiming at each vehicle target, calculating the average speed of the vehicle target on the specified road section according to the time of at least two frames of videos and the physical position of the vehicle target under the world coordinate system corresponding to the time of the at least two frames of videos;
and calculating the current road section average speed of each lane in the specified road section according to the average speed of each vehicle target in the specified road section.
14. The method as claimed in claim 12, wherein the analyzing the traffic road condition grade of each lane in the specified road segment according to the current traffic information of each lane in the specified road segment comprises:
acquiring preset traffic information of the specified road section;
determining a road traffic jam index of each lane in the specified road section according to the preset traffic flow information and the current traffic flow information of each lane in the specified road section;
and determining the traffic road condition grade of each lane in the specified road section by utilizing the corresponding relation between the pre-stored road traffic congestion index and the traffic road condition grade according to the road traffic congestion index of each lane in the specified road section.
15. The method of claim 14, wherein the current traffic information includes a current road segment average vehicle speed; the preset traffic flow information comprises a preset vehicle speed;
determining a road traffic congestion index of each lane in the specified road section according to the preset traffic information and the current traffic information of each lane in the specified road section, wherein the determining comprises the following steps:
and determining the road traffic jam index of each lane in the specified road section according to the preset vehicle speed and the current road section average vehicle speed of each lane in the specified road section.
16. The method of claim 12, wherein after analyzing the traffic road condition grade of each lane in the specified road segment according to the current traffic information of each lane in the specified road segment, the method further comprises:
and displaying the traffic road condition of each lane in the specified road section on a high-precision map according to the traffic road condition grade of each lane in the specified road section.
17. A camera comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, to implement the traffic road condition analysis method according to any one of claims 1 to 8, or to implement the traffic road condition analysis method according to claim 9.
18. A platform server comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, to implement the traffic road condition analysis method according to any one of claims 10 to 11, or to implement the traffic road condition analysis method according to any one of claims 12 to 16.
19. A traffic road condition analysis system is characterized by comprising a plurality of cameras and a platform server;
the camera is used for acquiring a plurality of frames of videos collected in real time and relative positions of a plurality of vehicle targets running on a specified road section in each frame of video; aiming at each vehicle target, converting the physical position of the vehicle target under the world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video; determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments; sending current traffic information of each lane in the specified road section to a platform server;
the platform server is used for receiving current traffic information of each lane in the specified road section sent by the camera; and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
20. A traffic accident information acquisition system is characterized by comprising a plurality of cameras and a platform server;
the camera is used for acquiring a video;
the platform server is used for acquiring multi-frame videos acquired by the camera in real time; identifying relative positions of a plurality of vehicle targets which travel on a specified road section in each frame of video; aiming at each vehicle target, acquiring the physical position of the vehicle target in a world coordinate system at the moment of acquiring each frame of video according to the relative position of the vehicle target in each frame of video; determining the current traffic information of each lane in the specified road section according to the physical positions of each vehicle target at different moments; and analyzing to obtain the traffic road condition grade of each lane in the specified road section according to the current traffic flow information of each lane in the specified road section.
CN201811478666.2A 2018-12-05 2018-12-05 Traffic road condition analysis method, system and camera Pending CN111275960A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811478666.2A CN111275960A (en) 2018-12-05 2018-12-05 Traffic road condition analysis method, system and camera

Publications (1)

Publication Number Publication Date
CN111275960A true CN111275960A (en) 2020-06-12

Family

ID=71000136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811478666.2A Pending CN111275960A (en) 2018-12-05 2018-12-05 Traffic road condition analysis method, system and camera

Country Status (1)

Country Link
CN (1) CN111275960A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004030484A (en) * 2002-06-28 2004-01-29 Mitsubishi Heavy Ind Ltd Traffic information providing system
CN1645037A (en) * 2004-12-31 2005-07-27 天津大学 Partitioned pointing method for three-dimensional scanning measurement system by light band method
CN103280098A (en) * 2013-05-23 2013-09-04 北京交通发展研究中心 Traffic congestion index calculation method
CN105761520A (en) * 2014-12-17 2016-07-13 上海宝康电子控制工程有限公司 System for realizing adaptive induction of traffic route
CN105989593A (en) * 2015-02-12 2016-10-05 杭州海康威视系统技术有限公司 Method and device for measuring speed of specific vehicle in video record
CN104835328A (en) * 2015-05-29 2015-08-12 徐承柬 Traffic flow display system and method
CN105427626A (en) * 2015-12-19 2016-03-23 长安大学 Vehicle flow statistics method based on video analysis
CN107301776A (en) * 2016-10-09 2017-10-27 上海炬宏信息技术有限公司 Track road conditions processing and dissemination method based on video detection technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙宇臣 et al.: "Calibration of a three-dimensional sensor using a linear partitioning method" (采用线性分区方法对三维传感器的标定), 《电子 激光》 *
李晓洁: "Research on key technologies for improving the performance of a laser three-dimensional human body scanning system" (提高激光三维人体扫描系统性能的关键技术研究), 《中国博士学位论文全文数据库 信息科技辑》 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067553A (en) * 2020-07-30 2022-02-18 英研智能移动股份有限公司 Traffic condition notification system and method
CN112069944B (en) * 2020-08-25 2024-04-05 青岛海信网络科技股份有限公司 Road congestion level determining method
CN112069944A (en) * 2020-08-25 2020-12-11 青岛海信网络科技股份有限公司 Road congestion level determination method
CN112330827A (en) * 2020-10-13 2021-02-05 北京精英路通科技有限公司 Parking charging method and device
CN112329722A (en) * 2020-11-26 2021-02-05 上海西井信息科技有限公司 Driving direction detection method, system, equipment and storage medium
CN112562330A (en) * 2020-11-27 2021-03-26 深圳市综合交通运行指挥中心 Method and device for evaluating road operation index, electronic equipment and storage medium
CN112907958A (en) * 2021-01-29 2021-06-04 北京百度网讯科技有限公司 Road condition information determining method and device, electronic equipment and readable medium
CN113112827A (en) * 2021-04-14 2021-07-13 深圳市旗扬特种装备技术工程有限公司 Intelligent traffic control method and intelligent traffic control system
CN113112827B (en) * 2021-04-14 2022-03-25 深圳市旗扬特种装备技术工程有限公司 Intelligent traffic control method and intelligent traffic control system
CN112991742A (en) * 2021-04-21 2021-06-18 四川见山科技有限责任公司 Visual simulation method and system for real-time traffic data
CN113470353A (en) * 2021-06-17 2021-10-01 新奇点智能科技集团有限公司 Traffic grade determination method and device, storage medium and electronic equipment
CN113469026A (en) * 2021-06-30 2021-10-01 上海智能交通有限公司 Intersection retention event detection method and system based on machine learning
CN113763425A (en) * 2021-08-30 2021-12-07 青岛海信网络科技股份有限公司 Road area calibration method and electronic equipment
WO2023151034A1 (en) * 2022-02-11 2023-08-17 华为技术有限公司 Traffic condition detection method, readable medium and electronic device
CN114677126A (en) * 2022-05-27 2022-06-28 深圳市一指淘科技有限公司 Public transport comprehensive regulation and control system for smart city based on multi-source data

Similar Documents

Publication Publication Date Title
CN111275960A (en) Traffic road condition analysis method, system and camera
CN105793669B (en) Vehicle position estimation system, device, method, and camera device
Tao et al. Lane marking aided vehicle localization
JP6781711B2 (en) Methods and systems for automatically recognizing parking zones
Brenner Extraction of features from mobile laser scanning data for future driver assistance systems
CN110954128B (en) Method, device, electronic equipment and storage medium for detecting lane line position change
JP3958133B2 (en) Vehicle position measuring apparatus and method
EP3842735B1 (en) Position coordinates estimation device, position coordinates estimation method, and program
CN111754581A (en) Camera calibration method, roadside sensing equipment and intelligent traffic system
JP2017519973A (en) Method and system for determining position relative to a digital map
CN104280036A (en) Traffic information detection and positioning method, device and electronic equipment
Murali et al. Smartphone-based crosswalk detection and localization for visually impaired pedestrians
US20200162724A1 (en) System and method for camera commissioning beacons
CN110018503B (en) Vehicle positioning method and positioning system
CN110135216B (en) Method and device for detecting lane number change area in electronic map and storage equipment
CN114755662A (en) Calibration method and device for laser radar and GPS with road-vehicle fusion perception
CN115164918A (en) Semantic point cloud map construction method and device and electronic equipment
CN111275957A (en) Traffic accident information acquisition method, system and camera
CN112446915B (en) Picture construction method and device based on image group
KR102273506B1 (en) Method, device and computer-readable storage medium with instructions for determinig the position of data detected by a motor vehicle
CN112255604A (en) Method and device for judging accuracy of radar data and computer equipment
RU2606521C1 (en) Method and system for vehicle average speed determining
CN109345576B (en) Vehicle running speed identification method and system
CN116045964A (en) High-precision map updating method and device
JP6031915B2 (en) Image processing apparatus and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200612)