CN111435565A - Road traffic state detection method, road traffic state detection device, electronic equipment and storage medium - Google Patents

Road traffic state detection method, road traffic state detection device, electronic equipment and storage medium

Info

Publication number
CN111435565A
CN111435565A CN201811605062.XA
Authority
CN
China
Prior art keywords
road
vehicle
image data
detected
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811605062.XA
Other languages
Chinese (zh)
Inventor
吴腾 (Wu Teng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201811605062.XA priority Critical patent/CN111435565A/en
Publication of CN111435565A publication Critical patent/CN111435565A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • G08G1/0133Traffic data processing for classifying traffic situation
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/065Traffic control systems for road vehicles by counting the vehicles in a section of the road or in a parking area, i.e. comparing incoming count with outgoing count
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/029Location-based management or tracking services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/40Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application provides a road traffic state detection method, a road traffic state detection device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring target image data of a road to be detected; determining the image position of each vehicle in the target image data by using a computer vision technology; determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data; and determining the traffic state of the road to be detected according to the actual geographical position of each vehicle. In the road traffic state detection method of the embodiment of the application, the actual geographic position of each vehicle in the road to be detected is determined by using the image data of the road to be detected, and each vehicle in the road to be detected can be counted, so that the traffic state of the road to be detected is evaluated.

Description

Road traffic state detection method, road traffic state detection device, electronic equipment and storage medium
Technical Field
The present application relates to the field of traffic monitoring technologies, and in particular, to a method and an apparatus for detecting a road traffic state, an electronic device, and a storage medium.
Background
With the growth in car ownership, traffic jams and similar problems occur frequently and become especially severe during rush hours. Detecting the road traffic state can help people avoid congested routes and relieve congestion pressure.
In the related art, the road traffic state is detected by measuring vehicle speed on the road. Specifically, a detection point is set on a designated road, the average traveling speed of vehicles passing the detection point over a period of time is measured, and when this average speed is lower than a certain threshold, the designated road is determined to be congested.
However, under special environmental conditions, for example in rainy or heavily foggy weather, vehicle speeds may fall below the set threshold even though the road is not actually congested. The above method therefore produces false congestion alarms, and the accuracy of road traffic state detection is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method and an apparatus for detecting a road traffic status, an electronic device, and a storage medium, so as to increase the accuracy of detecting a traffic status. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for detecting a road traffic state, where the method includes:
acquiring target image data of a road to be detected;
determining the image position of each vehicle in the target image data by using a computer vision technology;
determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data;
and determining the traffic state of the road to be detected according to the actual geographical position of each vehicle.
Optionally, the acquiring target image data of the road to be detected includes:
acquiring images of all road sections in a road to be detected, which are acquired by all lenses respectively;
and splicing the images of the road sections to obtain target image data of the road to be detected.
Optionally, the determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data includes:
and mapping the image positions of the vehicles in the target image data into the longitude and latitude of the vehicles in the actual road according to the corresponding relation between the positions in the predetermined image data and the longitude and latitude in the actual road to obtain the actual geographic positions of the vehicles in the road to be detected.
Optionally, the step of predetermining the corresponding relationship between each position in the image data and the longitude and latitude in the actual road includes:
acquiring the longitude and latitude of a plurality of specified calibration points in an actual road;
determining the position of each designated calibration point in the image data;
and determining the corresponding relation between each position in the image data and the longitude and latitude in the actual road according to the longitude and latitude of each designated calibration point and the position of the designated calibration point in the image data.
Optionally, the determining the traffic state of the road to be detected according to the actual geographic position of each vehicle includes:
and determining the traffic state of each lane in the road to be detected according to the actual geographic position of each vehicle.
Optionally, the method for detecting a road traffic state in the embodiment of the present application further includes:
and displaying the actual geographical position of each vehicle on an electronic map according to a preset display frequency, wherein the preset display frequency is not more than the frame rate of the target image data.
Optionally, the method for detecting a road traffic state in the embodiment of the present application further includes:
marking each vehicle with a unique identifier;
tracking each of the vehicles based on the identity of each of the vehicles.
In a second aspect, an embodiment of the present application provides a road traffic state detection device, where the device includes:
the target image acquisition module is used for acquiring target image data of a road to be detected;
the target image analysis module is used for determining the image position of each vehicle in the target image data by utilizing a computer vision technology;
the vehicle position determining module is used for determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data;
and the traffic condition analysis module is used for determining the traffic state of the road to be detected according to the actual geographic position of each vehicle.
Optionally, the target image obtaining module includes:
the road section image acquisition submodule is used for acquiring images of all road sections in the road to be detected, which are acquired by all the lenses respectively;
and the target image synthesis submodule is used for splicing the images of all the road sections to obtain the target image data of the road to be detected.
Optionally, the vehicle position determining module is specifically configured to:
and mapping the image positions of the vehicles in the target image data into the longitude and latitude of the vehicles in the actual road according to the corresponding relation between the positions in the predetermined image data and the longitude and latitude in the actual road to obtain the actual geographic positions of the vehicles in the road to be detected.
Optionally, the road traffic state detection device according to the embodiment of the present application further includes:
the calibration point position acquisition module is used for acquiring the longitude and latitude of a plurality of specified calibration points in an actual road;
the image data analysis module is used for determining the position of each designated calibration point in the image data;
and the second corresponding relation determining module is used for determining the corresponding relation between each position in the image data and the longitude and latitude in the actual road according to the longitude and latitude of each designated calibration point and the position of the designated calibration point in the image data.
Optionally, the traffic condition analysis module is specifically configured to:
and determining the traffic state of each lane in the road to be detected according to the actual geographic position of each vehicle.
Optionally, the road traffic state detection device according to the embodiment of the present application further includes:
and the vehicle position display module is used for displaying the actual geographic position of each vehicle on the electronic map according to a preset display frequency, wherein the preset display frequency is not more than the frame rate of the target image data.
Optionally, the road traffic state detection device according to the embodiment of the present application further includes:
the vehicle calibration module is used for marking each vehicle with a unique identifier;
and the vehicle tracking module is used for tracking each vehicle according to the identification of each vehicle.
In a third aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the method for detecting a road traffic condition according to any one of the first aspect described above when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method for detecting a road traffic state according to any one of the above first aspects is implemented.
The method, the device, the electronic equipment and the storage medium for detecting the road traffic state, provided by the embodiment of the application, are used for acquiring target image data of a road to be detected; determining the image position of each vehicle in the target image data by using a computer vision technology; determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data; and determining the traffic state of the road to be detected according to the actual geographical position of each vehicle. The actual geographic position of each vehicle in the road to be detected is determined by utilizing the image data of the road to be detected, and each vehicle in the road to be detected can be counted, so that the traffic state of the road to be detected is evaluated. Meanwhile, the actual geographical position of each vehicle can be obtained, and when a traffic problem occurs, the position of the place where the traffic problem occurs can be reported to help related departments to manage traffic order. The traffic flow of the road to be detected can be known in detail, and more reasonable traffic planning can be formulated. Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a road traffic state detection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a dual-lens camera according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of computing latitude and longitude distances in accordance with an embodiment of the present application;
FIG. 4 is another schematic diagram of a road traffic status detection method according to an embodiment of the present application;
FIG. 5 is a schematic view of a road traffic condition detecting device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a schematic diagram of an electronic map according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to improve the accuracy of traffic state detection, an embodiment of the present application provides a road traffic state detection method, and referring to fig. 1, the method includes:
s101, acquiring target image data of a road to be detected.
The method for detecting the road traffic state in the embodiment of the application can be realized through detection equipment, and specifically, the detection equipment can be a camera or a server.
The detection device acquires target image data of the road to be detected through a camera, and the target image data may be a video stream. For the road to be detected, a multi-lens camera may be used to collect the image data in order to improve its recognizability; of course, a single-lens camera with high resolution may be used instead, or single-lens cameras may be installed at specified intervals along the road to be detected. For example, the detection device acquires images of each section of the road to be detected through a plurality of single-lens cameras and stitches them to obtain the target image data. Alternatively, the detection device may directly use the image captured by a high-resolution single-lens camera as the target image data.
Optionally, the acquiring target image data of the road to be detected includes:
and S1011, acquiring images of all road sections in the road to be detected, which are respectively acquired by all the lenses.
And S1012, splicing the images of the road sections to obtain target image data of the road to be detected.
The detection device can collect images of different road sections in the road to be detected through each lens of a multi-lens camera, or through the lenses of multiple single-lens cameras. The detection device then stitches the images of the road sections according to their positions along the actual road (the actual road to be detected) to obtain the target image data of the road to be detected; alternatively, the multi-lens camera transmits the collected images of the road sections to a server, and the server completes the stitching of the target image data.
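As a minimal illustration of this stitching step (not the patent's own implementation), the sketch below joins per-section frames in road order with OpenCV; the ordering of the cameras along the road and the lack of overlap blending are simplifying assumptions.

```python
import cv2

def stitch_road_sections(section_frames):
    """Stitch per-section frames into one target image.

    section_frames: list of BGR images ordered by their position along the
    road (an assumption; the description only requires that the images be
    joined according to the actual road layout).
    """
    # Resize every frame to a common height so they can be joined side by side.
    target_h = min(f.shape[0] for f in section_frames)
    resized = [
        cv2.resize(f, (int(f.shape[1] * target_h / f.shape[0]), target_h))
        for f in section_frames
    ]
    # Simple horizontal concatenation; a production system would blend overlaps.
    return cv2.hconcat(resized)

# Usage with hypothetical per-section image files:
# frames = [cv2.imread(p) for p in ["section_0.jpg", "section_1.jpg", "section_2.jpg"]]
# target_image = stitch_road_sections(frames)
```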
And S102, determining the image position of each vehicle in the target image data by using a computer vision technology.
The detection device analyzes the target image data using computer vision technology, detects each vehicle in the target image data, and marks the position of each vehicle, thereby determining the image position of each vehicle in the target image data. The computer vision technology can be any technology for locating vehicles with a computer, for example a convolutional neural network that localizes vehicle positions.
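The description leaves the choice of computer vision technology open (a convolutional neural network is one example). As a minimal, runnable stand-in rather than the CNN detector mentioned above, the sketch below uses classical background subtraction in OpenCV 4.x to obtain vehicle bounding boxes; the minimum-area threshold is an assumed noise filter.

```python
import cv2

def detect_vehicle_boxes(frame, subtractor, min_area=800):
    """Return bounding boxes (x, y, w, h) of moving vehicles in one frame.

    Background subtraction stands in for the CNN-based detector mentioned
    in the description; min_area is an assumed filter for noise blobs.
    """
    mask = subtractor.apply(frame)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Usage over a video stream of the stitched road image (hypothetical source):
# cap = cv2.VideoCapture("road_to_detect.mp4")
# subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
# ok, frame = cap.read()
# boxes = detect_vehicle_boxes(frame, subtractor) if ok else []
```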
And S103, determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data.
The detection device obtains the actual geographic position of each vehicle in the road to be detected from the position of each vehicle in the target image. For example, the detection device converts the coordinates of each vehicle in the image data coordinate system (i.e., the coordinate system of the image acquisition device that captures the target image data) into coordinates in the coordinate system of the road to be detected, thereby determining the actual geographic position of each vehicle in the road to be detected.
The coordinates in the target image data may be converted into coordinates in the actual road according to the device parameters of the camera that collects the target image data. GPS (Global Positioning System) positioning information of the camera is acquired; to ensure calculation accuracy, the GPS positioning information should be accurate to the decimeter level, and it can be obtained from a GPS device inside the camera. The pitch angle of the camera (which can be measured by a gyroscope built into the camera) and its installation height are also obtained, and a conversion formula from the camera image coordinate system to the actual road (real-world) coordinate system is calculated from the GPS information, pitch angle, field angle, installation height, focal length and other information. The coordinate conversion can also be performed by manual calibration: several reference points are selected in the image, their GPS information in the actual road is measured, the positional relationship between this GPS information and the reference points in the image is established, and the GPS information of each position in the image is then determined from this relationship. The above applies to the case where the target image data is collected by a single camera; when the target image data is stitched from several images, the coordinate-system conversion needs to be calculated for each image, using the same or a similar method, which is not repeated here.
And S104, determining the traffic state of the road to be detected according to the actual geographical position of each vehicle.
The detection equipment analyzes the actual geographic position of each vehicle, and determines data such as the number of vehicles in the road to be detected, the vehicle queuing length, the road space occupancy and the like, so that the traffic state of the road to be detected is obtained.
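As an illustration of how such statistics might be derived from the per-vehicle geographic positions, the sketch below computes the vehicle count, an approximate queue length, and the space occupancy; the average vehicle length, the congestion threshold, and the queue-length definition (distance between the two farthest-apart vehicles) are assumptions, since the description does not fix formulas.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def road_traffic_state(vehicle_positions, road_length_m, avg_vehicle_len_m=5.0,
                       congestion_occupancy=0.5):
    """Summarise the traffic state of one road from vehicle (lat, lon) positions.

    avg_vehicle_len_m and congestion_occupancy are assumed values; the
    description only states that count, queue length and space occupancy
    are derived from the vehicle positions.
    """
    count = len(vehicle_positions)
    # Queue length approximated as the distance between the two vehicles
    # farthest apart (an assumption, not the patent's definition).
    if count >= 2:
        queue_len = max(
            haversine_m(a[0], a[1], b[0], b[1])
            for i, a in enumerate(vehicle_positions)
            for b in vehicle_positions[i + 1:]
        )
    else:
        queue_len = 0.0
    occupancy = min(1.0, count * avg_vehicle_len_m / road_length_m)
    return {
        "vehicle_count": count,
        "queue_length_m": round(queue_len, 1),
        "space_occupancy": round(occupancy, 3),
        "congested": occupancy >= congestion_occupancy,
    }

# Example: three vehicles on a 200 m road section (coordinates are illustrative only).
# print(road_traffic_state([(30.2741, 120.1551), (30.2743, 120.1551), (30.2745, 120.1552)], 200))
```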
In the embodiment of the application, the actual geographic position of each vehicle in the road to be detected is determined by using the image data of the road to be detected, and each vehicle in the road to be detected can be counted, so that the traffic state of the road to be detected is evaluated. Meanwhile, the actual geographical position of each vehicle can be obtained, and when a traffic problem occurs, the position of the place where the traffic problem occurs can be reported to help related departments to manage traffic order. The traffic flow of the road to be detected can be known in detail, and more reasonable traffic planning can be formulated.
Optionally, the determining the traffic state of the road to be detected according to the actual geographic position of each vehicle includes:
and determining the traffic state of each lane in the road to be detected according to the actual geographical position of each vehicle.
In the embodiment of the application, data such as traffic flow, lane congestion degree, vehicle queuing length, lane space occupancy and the like of each lane in the road to be detected can be determined according to the actual geographic position of the vehicle, so that the traffic state of each lane in the road to be detected can be obtained. Compared with the traffic state of the road, the traffic state of each lane is more precise, and the method can be used for making more reasonable traffic plans.
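A per-lane breakdown can be sketched in a similar way. In the toy example below, vehicles are assigned to lanes by longitude bands of a straight road segment, which is a simplifying assumption; a real system would use the lane geometry from the road map.

```python
def lane_traffic_states(vehicle_latlons, lane_bounds_lon, lane_length_m,
                        avg_vehicle_len_m=5.0):
    """Group vehicles into lanes and report a simple per-lane state.

    lane_bounds_lon: longitudes of the lane dividing lines for a roughly
    north-south road segment -- an assumption made for illustration.
    """
    lane_counts = [0] * (len(lane_bounds_lon) - 1)
    for _, lon in vehicle_latlons:
        for i in range(len(lane_bounds_lon) - 1):
            if lane_bounds_lon[i] <= lon < lane_bounds_lon[i + 1]:
                lane_counts[i] += 1
                break
    return [
        {
            "lane": i + 1,
            "vehicle_count": n,
            "space_occupancy": round(min(1.0, n * avg_vehicle_len_m / lane_length_m), 3),
        }
        for i, n in enumerate(lane_counts)
    ]

# Example: two lanes separated at longitude 120.15515 (illustrative values only).
# print(lane_traffic_states([(30.2741, 120.15512), (30.2743, 120.15519)],
#                           lane_bounds_lon=[120.15508, 120.15515, 120.15522],
#                           lane_length_m=200))
```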
Optionally, the method for detecting a road traffic state in the embodiment of the present application further includes:
and displaying the actual geographical position of each vehicle on an electronic map according to a preset display frequency, wherein the preset display frequency is not more than the frame rate of the target image data.
The detection device periodically displays the actual geographic position of each vehicle on the electronic map according to a preset display frequency. The preset display frequency can be set according to actual requirements, but it should not exceed the frame rate of the target image data: when the display frequency is greater than the frame rate, the vehicle positions of the same frame of target image data are displayed repeatedly on the electronic map, which wastes display capacity. In one embodiment, the preset display frequency is the same as the frame rate of the target image data; the detection device transmits the GPS information of the vehicles in each frame of the target image data to the electronic map, which displays a preset mark at the corresponding position for each vehicle. Because the data of each frame is transmitted in real time, the electronic map shows the simulated vehicle positions in real time and presents a dynamic animation on the platform that simulates the motion of the vehicles in the real environment.
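A minimal sketch of this throttling behaviour follows; the push_to_map callback is a placeholder for the map platform's interface, not a real API.

```python
def display_positions(per_frame_positions, frame_rate_hz, display_hz, push_to_map):
    """Forward per-frame vehicle GPS positions to an electronic map.

    display_hz must not exceed frame_rate_hz, since pushing the same frame's
    positions more than once only wastes display capacity.
    push_to_map(positions) is a placeholder for the map platform's API.
    """
    display_hz = min(display_hz, frame_rate_hz)
    step = max(1, round(frame_rate_hz / display_hz))  # keep every step-th frame
    for i, positions in enumerate(per_frame_positions):
        if i % step == 0:
            push_to_map(positions)

# Usage: 25 fps video, map refreshed 5 times per second (assumed numbers).
# display_positions(per_frame_positions, frame_rate_hz=25, display_hz=5,
#                   push_to_map=lambda pts: print(len(pts), "markers"))
```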
The electronic map of this embodiment can mark the position of each vehicle in real time on a two-dimensional plan of the road to be detected, as shown in fig. 7. The electronic map may also be a three-dimensional map that displays the road and the vehicles in three dimensions, and for ease of operation the position of each vehicle may be shown in existing map software such as Baidu Maps or Gaode Maps (Amap). Of course, the electronic map may also be generated automatically from the actual scene of the road to be detected, which is not described in detail here.
Specifically, the detection device may be a platform server, and the platform server acquires a traffic state of the road to be detected, may also acquire traffic data such as a traffic state of each lane in the road to be detected and a tracking result of each vehicle, and imports the traffic data into the electronic map. Therefore, the traffic conditions of all roads in the city, including specific GPS information of the road congestion points and specific vehicles causing congestion, can be seen through the electronic map.
Optionally, the method for detecting a road traffic state in the embodiment of the present application further includes:
step one, marking each vehicle with a unique identifier.
In the process of determining the position of each vehicle in the target image data by using the computer vision technology, each vehicle can be marked, and a unique identifier is allocated to each vehicle.
And secondly, tracking each vehicle according to the identification of each vehicle.
The vehicle identifier facilitates tracking of the vehicle; for example, clicking the mark of a vehicle on the platform can display that vehicle's travel track. The camera may assign a unique identifier to a vehicle by recognizing its features and associating the identifier with the vehicle's GPS position information. Based on the identifier of each vehicle, the vehicle is tracked in the subsequent monitoring video stream, and parameters such as its speed and driving route are determined, which facilitates traffic management and planning.
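As a rough illustration of identifier assignment and frame-to-frame tracking, the sketch below implements a simple nearest-neighbour tracker over image positions; the matching rule and the displacement threshold are assumptions, not the patent's method.

```python
from itertools import count
from math import hypot

class SimpleTracker:
    """Assign a unique identifier to each vehicle and track it across frames.

    Matching is nearest-neighbour on (x, y) image positions with an assumed
    maximum displacement per frame; the description only requires that each
    vehicle carry a unique identifier used for later tracking.
    """

    def __init__(self, max_dist=60.0):
        self._ids = count(1)
        self.tracks = {}          # id -> last known position
        self.max_dist = max_dist

    def update(self, detections):
        assigned = {}
        unused = dict(self.tracks)
        for pos in detections:
            # Pick the closest existing track within max_dist, else open a new one.
            best = min(unused.items(),
                       key=lambda kv: hypot(kv[1][0] - pos[0], kv[1][1] - pos[1]),
                       default=None)
            if best and hypot(best[1][0] - pos[0], best[1][1] - pos[1]) <= self.max_dist:
                vid = best[0]
                del unused[vid]
            else:
                vid = next(self._ids)
            assigned[vid] = pos
        self.tracks = assigned
        return assigned   # id -> current position for this frame

# tracker = SimpleTracker()
# ids = tracker.update([(120, 340), (480, 355)])   # frame 1
# ids = tracker.update([(128, 338), (474, 352)])   # frame 2: same IDs persist
```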
Optionally, the determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data includes:
the first step is to determine the image coordinates of each vehicle in the target image data according to the image position of each vehicle in the target image data.
And step two, determining the PT (pan-tilt) coordinates of the image acquisition device when it directly faces the vehicle, according to the image coordinates and the field angle of the image acquisition device, and taking them as a first P coordinate and a first T coordinate.
The image acquisition device is the device that captures the target image data; the description here takes target image data captured by a single lens as an example (when the target image data is composed of a plurality of sub-images, the calculation may be performed for each sub-image, using the same method as for single-lens capture). For convenience of description, the P coordinate when the dome camera faces the vehicle is referred to as the first P coordinate, and the T coordinate when the dome camera faces the vehicle is referred to as the first T coordinate.
And step three, acquiring a P coordinate when the image acquisition equipment points to the specified direction as a second P coordinate.
The P coordinate when the dome camera points due north, due south, due east or due west can be obtained through the dome camera's electronic compass; this P coordinate is referred to as the second P coordinate.
And step four, calculating the difference between the first P coordinate and the second P coordinate to be used as the horizontal included angle between the vehicle and the specified direction.
And fifthly, calculating the horizontal distance between the vehicle and the image acquisition equipment based on the first T coordinate and the installation height of the image acquisition equipment.
The product of the tangent of the first T coordinate and the mounting height of the dome camera may be calculated as the horizontal distance between the vehicle and the dome camera, i.e. L = h × tan T, where h represents the mounting height of the dome camera and L represents the horizontal distance between the vehicle and the dome camera.
And sixthly, calculating the longitude and latitude distance between the vehicle and the image acquisition equipment through a trigonometric function according to the horizontal included angle and the horizontal distance.
In one embodiment, the designated direction in step three is due north. In this case, the product of the sine of the horizontal angle and the horizontal distance is calculated as the longitude distance between the monitored target (the vehicle) and the dome camera, and the product of the cosine of the horizontal angle and the horizontal distance is calculated as the latitude distance between the monitored target and the dome camera. As shown in fig. 3 (the height of the dome camera is omitted from the figure): L × sin θ = Llon and L × cos θ = Llat, where L denotes the horizontal distance, θ denotes the horizontal angle between the vehicle and due north, Llon denotes the longitude distance between the monitored target and the dome camera, and Llat denotes the latitude distance between the monitored target and the dome camera.
Alternatively, the designated direction in step three is due east. In this case, the product of the cosine of the horizontal angle and the horizontal distance is calculated as the longitude distance between the monitored target and the dome camera, and the product of the sine of the horizontal angle and the horizontal distance is calculated as the latitude distance between the monitored target and the dome camera. With the horizontal angle between the vehicle and due east denoted α in fig. 3: L × cos α = Llon and L × sin α = Llat.
If the designated direction is due west or due south, the calculation process is similar and is not repeated here.
And seventhly, calculating the longitude and latitude position of the vehicle based on the longitude and latitude of the image acquisition equipment and the longitude and latitude distance.
This embodiment of the application thus provides a method for determining the actual geographic position of each vehicle in the road to be detected from its position in the target image data when the image acquisition device is a dome (PTZ) camera, so that the actual geographic position of each vehicle can be determined conveniently.
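For illustration, a minimal Python sketch of steps two to seven is given below, taking due north as the designated direction and assuming the first P/T coordinates have already been derived in step one. The final metre-to-degree conversion in step seven is not spelled out in the description, so the flat-earth approximation used here (about 111,320 m per degree of latitude) is an assumption, as are all numeric values in the usage example.

```python
from math import radians, tan, sin, cos

M_PER_DEG_LAT = 111_320.0   # approximate metres per degree of latitude (assumption)

def vehicle_latlon(cam_lat, cam_lon, cam_height_m, first_p_deg, first_t_deg, north_p_deg):
    """Estimate a vehicle's latitude/longitude from dome-camera PT readings.

    Follows steps two to seven above with due north as the designated direction:
      horizontal angle   theta = first P coordinate - P coordinate at due north
      horizontal dist    L     = h * tan(T)
      longitude dist     L_lon = L * sin(theta)
      latitude dist      L_lat = L * cos(theta)
    The metre-to-degree conversion below is a flat-earth approximation that
    the description does not specify; treat it as an assumption.
    """
    theta = radians(first_p_deg - north_p_deg)
    horizontal = cam_height_m * tan(radians(first_t_deg))
    l_lon = horizontal * sin(theta)
    l_lat = horizontal * cos(theta)
    lat = cam_lat + l_lat / M_PER_DEG_LAT
    lon = cam_lon + l_lon / (M_PER_DEG_LAT * cos(radians(cam_lat)))
    return lat, lon

# Example: camera 8 m high at (30.2741, 120.1551), vehicle 30 degrees east of north,
# tilt such that tan(T) puts it about 40 m away (all values illustrative).
# print(vehicle_latlon(30.2741, 120.1551, 8.0, first_p_deg=150.0, first_t_deg=78.7, north_p_deg=120.0))
```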
Optionally, the determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data includes:
and mapping the image position of the vehicle in the target image data to the longitude and latitude of the vehicle in the actual road according to the corresponding relation between each position in the predetermined image data and the longitude and latitude in the actual road to obtain the actual geographic position of the vehicle in the road to be detected.
The correspondence between each position in the image data and the longitude and latitude in the actual road can be established in advance. For example, when the image acquisition device is a bullet (fixed) camera, whose shooting angle is fixed, the correspondence between each position in the image data and the longitude and latitude in the actual road is pre-established, and the conversion can then be performed directly according to this correspondence whenever needed, which saves computing resources.
Optionally, the step of predetermining the corresponding relationship between each position in the image data and the longitude and latitude in the actual road includes:
acquiring equipment parameters of image acquisition equipment, wherein the equipment parameters of the image acquisition equipment comprise a focal length, a lens angle, a field angle, an installation height and longitude and latitude of the image acquisition equipment.
And step two, calculating the target distance between the target road area covered by the image data collected by the image acquisition device and the image acquisition device itself, according to the focal length, lens angle, field angle and installation height of the image acquisition device.
And step three, calculating the longitude and latitude of the target road area according to the longitude and latitude of the image acquisition equipment and the target distance.
And step four, mapping the longitude and latitude of the target road area into the image data to obtain the corresponding relation between each position in the image data and the longitude and latitude in the actual road.
In the embodiment of the application, the distance (i.e., the target distance) between each position in the monitoring area (i.e., the target road area) of the image acquisition device and the image acquisition device can be calculated through a mathematical method, then the longitude and latitude of each position in the target road area are determined according to the longitude and latitude of the image acquisition device and the target distance, the longitude and latitude of each position in the target road area are mapped with the image data coordinate system, and the corresponding relation between each position in the image data and the longitude and latitude in the actual road is obtained.
Optionally, the step of predetermining the corresponding relationship between each position in the image data and the longitude and latitude in the actual road includes:
the method comprises the steps of firstly, acquiring the longitude and latitude of a plurality of specified calibration points in an actual road.
Easily identifiable objects in the actual road are selected as the designated calibration points, and the longitude and latitude of each designated calibration point are measured.
And step two, determining the position of each designated calibration point in the image data.
Image data of the actual road is captured with a bullet (fixed) camera, where the image data contains images of all the designated calibration points. The position of each designated calibration point is then marked in the image data.
And step three, determining the corresponding relation between each position in the image data and the longitude and latitude in the actual road according to the longitude and latitude of each designated calibration point and the position of the designated calibration point in the image data.
Based on the correspondence of each designated calibration point between the actual road and the image data, the longitude and latitude of each point in the image data are calculated from the longitude and latitude of the designated calibration points by affine transformation or other methods, thereby obtaining the correspondence between each position in the image data and the longitude and latitude in the actual road.
In this embodiment of the application, when the image is acquired with a bullet (fixed) camera, the correspondence between each position in the image data and the longitude and latitude in the actual road is established by calibration-point mapping, which keeps the calculation simple.
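As an illustration of this calibration-point mapping, the sketch below builds the correspondence from three designated calibration points using OpenCV's affine transform. The three-point restriction, the function and variable names, and the example coordinates are assumptions made for the sketch; the description allows affine transformation "or other methods".

```python
import numpy as np
import cv2

def build_pixel_to_latlon(calibration_pixels, calibration_latlons):
    """Build an image-position -> longitude/latitude mapping from three
    designated calibration points via an affine transform.

    calibration_pixels:  three (u, v) image positions of the calibration points
    calibration_latlons: the measured (lon, lat) of the same three points
    """
    src = np.float32(calibration_pixels)
    dst = np.float32(calibration_latlons)
    affine = cv2.getAffineTransform(src, dst)   # 2x3 matrix

    def pixel_to_latlon(u, v):
        # Apply the affine map to homogeneous pixel coordinates.
        lon, lat = affine @ np.array([u, v, 1.0])
        return lat, lon

    return pixel_to_latlon

# Example with illustrative values: three landmarks measured in the road and
# located in a bullet-camera frame.
# to_latlon = build_pixel_to_latlon(
#     [(120, 640), (900, 655), (510, 210)],
#     [(120.1551, 30.2741), (120.1559, 30.2741), (120.1555, 30.2752)])
# print(to_latlon(640, 400))
```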
The embodiment of the application also provides a road traffic state detection system, which is shown in fig. 4 and comprises a camera and a server.
The camera is used for: collecting image data of the road to be detected; detecting the positions of vehicles in the image data using computer vision technology; calculating GPS information of each vehicle according to its position in the image data; compiling, from the GPS information of each vehicle, the traffic conditions of each lane in the road to be detected, where the lane traffic conditions include lane flow, lane congestion degree and vehicle queue length; judging whether each lane is congested according to its traffic conditions; for a congested lane, sending the congested lane's GPS position, congestion length, queue length and the like to the server; and sending the GPS information of each vehicle to the server.
The server is used for: receiving the GPS information of each vehicle, the GPS positions of congested lanes, congestion lengths, queue lengths and the like, and displaying the received data on the electronic map.
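For illustration only, the sketch below assembles the kind of per-frame report such a camera might send to the server; the field names and the JSON encoding are assumptions, since the description only lists the data items involved.

```python
import json
import time

def build_camera_report(road_id, vehicle_latlons, lane_states):
    """Assemble one camera-to-server report message.

    Field names and JSON encoding are assumptions made for illustration;
    the description only states that per-vehicle GPS information and, for
    congested lanes, the GPS position, congestion length and queue length
    are sent to the server.
    """
    return json.dumps({
        "road_id": road_id,
        "timestamp": time.time(),
        "vehicles": [{"id": vid, "lat": lat, "lon": lon}
                     for vid, (lat, lon) in vehicle_latlons.items()],
        "congested_lanes": [s for s in lane_states if s.get("congested")],
    })

# Example payload (illustrative values only):
# print(build_camera_report(
#     "road_001",
#     {1: (30.2741, 120.1551), 2: (30.2743, 120.1552)},
#     [{"lane": 1, "congested": True, "gps": (30.2742, 120.1551),
#       "congestion_length_m": 85.0, "queue_length_m": 60.0}],
# ))
```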
The embodiment of the present application further provides a road traffic state detection device, refer to fig. 5, and the device includes:
a target image obtaining module 501, configured to obtain target image data of a road to be detected;
a target image analysis module 502, configured to determine image positions of vehicles in the target image data by using a computer vision technology;
a vehicle position determining module 503, configured to determine an actual geographic position of each vehicle in the road to be detected according to an image position of each vehicle in the target image data;
and a traffic condition analysis module 504, configured to determine a traffic condition of the road to be detected according to the actual geographic location of each vehicle.
Optionally, the target image obtaining module 501 includes:
the road section image acquisition submodule is used for respectively acquiring images of all road sections in the road to be detected through all lenses of the multi-lens camera;
and the target image synthesis submodule is used for splicing the images of all the road sections to obtain the target image data of the road to be detected.
Optionally, the vehicle position determining module 503 includes:
a first coordinate determination submodule configured to determine image coordinates of each of the vehicles in the target image data according to a position of each of the vehicles in the target image data;
a second coordinate determination submodule, configured to determine, according to the image coordinate and a field angle of the image capturing device, a PT coordinate when the image capturing device faces the vehicle as a first P coordinate and a first T coordinate;
the third coordinate determination submodule is used for acquiring a P coordinate when the image acquisition equipment points to the designated direction and taking the P coordinate as a second P coordinate;
a horizontal included angle determining submodule, configured to calculate a difference between the first P coordinate and the second P coordinate, as a horizontal included angle between the vehicle and the designated direction;
a horizontal distance determination submodule for calculating a horizontal distance between the vehicle and the image capturing apparatus based on the first T coordinate and an installation height of the image capturing apparatus;
a longitude and latitude distance determination submodule for calculating the longitude and latitude distance between the vehicle and the image acquisition device through a trigonometric function according to the horizontal included angle and the horizontal distance;
and the longitude and latitude position determination submodule is used for calculating the longitude and latitude position of the vehicle based on the longitude and latitude of the image acquisition equipment and the longitude and latitude distance.
Optionally, the vehicle position determining module 503 is specifically configured to:
and mapping the image position of the vehicle in the target image data to the longitude and latitude of the vehicle in the actual road according to the corresponding relation between each position in the predetermined image data and the longitude and latitude in the actual road to obtain the actual geographic position of the vehicle in the road to be detected.
Optionally, the road traffic state detection device according to the embodiment of the present application further includes:
the device parameter acquisition module is used for acquiring device parameters of image acquisition equipment for acquiring the target image data, wherein the device parameters of the image acquisition equipment comprise a focal length, a lens angle, a field angle, an installation height and a longitude and latitude of the image acquisition equipment;
the target distance determining module is used for calculating the target distance between a target road area of image data acquired by the image acquisition equipment and the image acquisition equipment according to the focal length, the lens angle, the view angle and the erection height of the image acquisition equipment;
the area longitude and latitude determining module is used for calculating the longitude and latitude of the target road area according to the longitude and latitude of the image acquisition equipment and the target distance;
and the first corresponding relation determining module is used for mapping the longitude and latitude of the target road area into the image data to obtain the corresponding relation between each position in the image data and the longitude and latitude in the actual road.
Optionally, the road traffic state detection device according to the embodiment of the present application further includes:
the calibration point position acquisition module is used for acquiring the longitude and latitude of a plurality of specified calibration points in an actual road;
the image data analysis module is used for determining the position of each designated calibration point in the image data;
and the second corresponding relation determining module is used for determining the corresponding relation between each position in the image data and the longitude and latitude in the actual road according to the longitude and latitude of each designated calibration point and the position of the designated calibration point in the image data.
Optionally, the traffic condition analysis module 504 is specifically configured to:
and determining the traffic state of each lane in the road to be detected according to the actual geographical position of each vehicle.
Optionally, the road traffic state detection device according to the embodiment of the present application further includes:
and the vehicle position display module is used for displaying the actual geographic position of each vehicle on an electronic map according to a preset display frequency, wherein the preset display frequency is not more than the frame rate of the target image data.
Optionally, the road traffic state detection device according to the embodiment of the present application further includes:
the vehicle calibration module is used for marking each vehicle with a unique identifier;
and the vehicle tracking module is used for tracking each vehicle according to the identification of each vehicle.
The embodiment of the application also provides an electronic device, which comprises a processor and a memory;
the memory is used for storing computer programs;
the processor is configured to implement the following steps when executing the program stored in the memory:
acquiring target image data of a road to be detected;
determining the position of each vehicle in the target image data by using a computer vision technology;
determining the actual geographical position of each vehicle in the road to be detected according to the position of each vehicle in the target image data;
and determining the traffic state of the road to be detected according to the actual geographical position of each vehicle.
Optionally, when the processor is configured to execute the program stored in the memory, any one of the above road traffic state detection methods may be further implemented. Specifically, the electronic device is a camera or a server.
Optionally, as shown in fig. 6, the electronic device according to the embodiment of the present application further includes: a communication interface 602, and a communication bus 604, wherein the processor 601, the communication interface 602, and the memory 603 communicate with each other via the communication bus 604.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the computer program implements any one of the above road traffic state detection methods.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the electronic device, and the storage medium, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (16)

1. A method of road traffic condition detection, the method comprising:
acquiring target image data of a road to be detected;
determining the image position of each vehicle in the target image data by using a computer vision technology;
determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data;
and determining the traffic state of the road to be detected according to the actual geographical position of each vehicle.
2. The method according to claim 1, wherein the acquiring target image data of the road to be detected comprises:
acquiring images of all road sections in a road to be detected, which are acquired by all lenses respectively;
and splicing the images of the road sections to obtain target image data of the road to be detected.
3. The method of claim 1, wherein determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data comprises:
and mapping the image positions of the vehicles in the target image data into the longitude and latitude of the vehicles in the actual road according to the corresponding relation between the positions in the predetermined image data and the longitude and latitude in the actual road to obtain the actual geographic positions of the vehicles in the road to be detected.
4. The method of claim 3, wherein the step of predetermining the correspondence of each location in the image data to the longitude and latitude in the actual road comprises:
acquiring the longitude and latitude of a plurality of specified calibration points in an actual road;
determining the position of each designated calibration point in the image data;
and determining the corresponding relation between each position in the image data and the longitude and latitude in the actual road according to the longitude and latitude of each designated calibration point and the position of the designated calibration point in the image data.
5. The method according to claim 1, wherein said determining the traffic status of said road to be detected according to the actual geographical position of each of said vehicles comprises:
and determining the traffic state of each lane in the road to be detected according to the actual geographic position of each vehicle.
6. The method of claim 1, wherein after determining the actual geographic location of each of the vehicles in the road to be detected according to its image location in the target image data, the method further comprises:
and displaying the actual geographical position of each vehicle on an electronic map according to a preset display frequency, wherein the preset display frequency is not more than the frame rate of the target image data.
7. The method of claim 1, further comprising:
respectively marking a unique identifier for each vehicle;
tracking each of the vehicles based on the identity of each of the vehicles.
8. A road traffic condition detection apparatus, characterized in that the apparatus comprises:
the target image acquisition module is used for acquiring target image data of a road to be detected;
the target image analysis module is used for determining the image position of each vehicle in the target image data by utilizing a computer vision technology;
the vehicle position determining module is used for determining the actual geographic position of each vehicle in the road to be detected according to the image position of each vehicle in the target image data;
and the traffic condition analysis module is used for determining the traffic state of the road to be detected according to the actual geographic position of each vehicle.
9. The apparatus of claim 8, wherein the target image acquisition module comprises:
the road section image acquisition submodule is used for acquiring images of all road sections in the road to be detected, which are acquired by all the lenses respectively;
and the target image synthesis submodule is used for splicing the images of all the road sections to obtain the target image data of the road to be detected.
10. The apparatus of claim 8, wherein the vehicle position determination module is specifically configured to:
and mapping the image positions of the vehicles in the target image data into the longitude and latitude of the vehicles in the actual road according to the corresponding relation between the positions in the predetermined image data and the longitude and latitude in the actual road to obtain the actual geographic positions of the vehicles in the road to be detected.
11. The apparatus of claim 10, further comprising:
the calibration point position acquisition module is used for acquiring the longitude and latitude of a plurality of specified calibration points in an actual road;
the image data analysis module is used for determining the position of each designated calibration point in the image data;
and the second corresponding relation determining module is used for determining the corresponding relation between each position in the image data and the longitude and latitude in the actual road according to the longitude and latitude of each designated calibration point and the position of the designated calibration point in the image data.
12. The apparatus of claim 8, wherein the traffic condition analysis module is specifically configured to:
and determining the traffic state of each lane in the road to be detected according to the actual geographic position of each vehicle.
13. The apparatus of claim 8, further comprising:
and the vehicle position display module is used for displaying the actual geographic position of each vehicle on the electronic map according to a preset display frequency, wherein the preset display frequency is not more than the frame rate of the target image data.
14. The apparatus of claim 8, further comprising:
the vehicle calibration module is used for marking each vehicle with a unique identifier;
and the vehicle tracking module is used for tracking each vehicle according to the identification of each vehicle.
15. An electronic device comprising a processor and a memory;
the memory is used for storing a computer program;
the processor, when executing the program stored in the memory, implementing the method steps of any of claims 1-7.
16. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN201811605062.XA 2018-12-26 2018-12-26 Road traffic state detection method, road traffic state detection device, electronic equipment and storage medium Pending CN111435565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811605062.XA CN111435565A (en) 2018-12-26 2018-12-26 Road traffic state detection method, road traffic state detection device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811605062.XA CN111435565A (en) 2018-12-26 2018-12-26 Road traffic state detection method, road traffic state detection device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111435565A true CN111435565A (en) 2020-07-21

Family

ID=71579781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811605062.XA Pending CN111435565A (en) 2018-12-26 2018-12-26 Road traffic state detection method, road traffic state detection device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111435565A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767681A (en) * 2020-12-16 2021-05-07 济南博观智能科技有限公司 Traffic state detection method, device and related equipment
CN113286096A (en) * 2021-05-19 2021-08-20 中移(上海)信息通信科技有限公司 Video identification method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004030046A (en) * 2002-06-24 2004-01-29 Nippon Telegr & Teleph Corp <Ntt> Toll collection system
CN106846846A (en) * 2017-02-02 2017-06-13 南京交通职业技术学院 A kind of robot system for dredging congestion
CN108777070A (en) * 2018-06-14 2018-11-09 浙江希仁通信技术有限公司 The road method of real-time and system sampled based on track grid and vehicle
CN108896994A (en) * 2018-05-11 2018-11-27 武汉环宇智行科技有限公司 A kind of automatic driving vehicle localization method and equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004030046A (en) * 2002-06-24 2004-01-29 Nippon Telegr & Teleph Corp <Ntt> Toll collection system
CN106846846A (en) * 2017-02-02 2017-06-13 南京交通职业技术学院 A kind of robot system for dredging congestion
CN108896994A (en) * 2018-05-11 2018-11-27 武汉环宇智行科技有限公司 A kind of automatic driving vehicle localization method and equipment
CN108777070A (en) * 2018-06-14 2018-11-09 浙江希仁通信技术有限公司 The road method of real-time and system sampled based on track grid and vehicle

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767681A (en) * 2020-12-16 2021-05-07 济南博观智能科技有限公司 Traffic state detection method, device and related equipment
CN112767681B (en) * 2020-12-16 2022-08-19 济南博观智能科技有限公司 Traffic state detection method, device and related equipment
CN113286096A (en) * 2021-05-19 2021-08-20 中移(上海)信息通信科技有限公司 Video identification method and system

Similar Documents

Publication Publication Date Title
US11874119B2 (en) Traffic boundary mapping
CN105793669B (en) Vehicle position estimation system, device, method, and camera device
CN110146097B (en) Method and system for generating automatic driving navigation map, vehicle-mounted terminal and server
CN110617821B (en) Positioning method, positioning device and storage medium
CN111275960A (en) Traffic road condition analysis method, system and camera
JP6950832B2 (en) Position coordinate estimation device, position coordinate estimation method and program
JP2008065087A (en) Apparatus for creating stationary object map
JP2018077162A (en) Vehicle position detection device, vehicle position detection method and computer program for vehicle position detection
CN112950717A (en) Space calibration method and system
EP3994043A1 (en) Sourced lateral offset for adas or ad features
CN110018503B (en) Vehicle positioning method and positioning system
CN111275957A (en) Traffic accident information acquisition method, system and camera
CN111435565A (en) Road traffic state detection method, road traffic state detection device, electronic equipment and storage medium
CN110764526B (en) Unmanned aerial vehicle flight control method and device
CN115457084A (en) Multi-camera target detection tracking method and device
CN112530270B (en) Mapping method and device based on region allocation
JP2022001975A (en) Map information collection device
RU186890U1 (en) VEHICLE-FREE AUTOMATED VEHICLE REGISTRATION COMPLEX
WO2021005073A1 (en) Method for aligning crowd-sourced data to provide an environmental data model of a scene
Jomrich et al. Lane Accurate Detection of Map Changes based on Low Cost Smartphone Data.
JP2021124633A (en) Map generation system and map generation program
TWI811954B (en) Positioning system and calibration method of object location
CN113822932B (en) Device positioning method, device, nonvolatile storage medium and processor
CN111275823B (en) Target associated data display method, device and system
US20230394679A1 (en) Method for measuring the speed of a vehicle

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200721