CN112784789B - Method, device, electronic equipment and medium for identifying traffic flow of road - Google Patents


Info

Publication number
CN112784789B
CN112784789B (application CN202110127819.4A)
Authority
CN
China
Prior art keywords
road
lane
queuing
image
coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110127819.4A
Other languages
Chinese (zh)
Other versions
CN112784789A (en)
Inventor
暴雨
梁海金
杨玲玲
李成洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110127819.4A priority Critical patent/CN112784789B/en
Publication of CN112784789A publication Critical patent/CN112784789A/en
Application granted granted Critical
Publication of CN112784789B publication Critical patent/CN112784789B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects, of traffic, e.g. cars on the road, trains or boats
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/24 — Classification techniques
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 — Machine learning
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/20 — Image preprocessing
    • G06V10/26 — Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 — Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G — PHYSICS
    • G08 — SIGNALLING
    • G08G — TRAFFIC CONTROL SYSTEMS
    • G08G1/00 — Traffic control systems for road vehicles
    • G08G1/01 — Detecting movement of traffic to be counted or controlled
    • G08G1/017 — Detecting movement of traffic to be counted or controlled, identifying vehicles
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 — Road transport of goods or passengers
    • Y02T10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T10/40 — Engine management systems

Abstract

The present disclosure provides a method for identifying the traffic flow of a road, relating to the technical field of image processing and in particular to deep learning and intelligent transportation. The specific implementation scheme is as follows: acquire attribute data of a road segment at a traffic-light intersection and an image of the road segment; determine the category of the road segment according to the attribute data; calculate a queuing coefficient for each lane on the road segment according to the segment's category and image; and identify the traffic flow of each lane according to its queuing coefficient. The disclosure also provides an apparatus for identifying the traffic flow of a road, an electronic device, and a storage medium.

Description

Method, device, electronic equipment and medium for identifying traffic flow of road
Technical Field
The present disclosure relates to the field of image processing technology, in particular to deep learning and intelligent transportation, and more specifically to a method, an apparatus, an electronic device, and a storage medium for identifying the traffic flow of a road.
Background
The publication of real-time road traffic flow information is an essential component of vehicle navigation. Publishing traffic flow information derived from vehicle driving trajectories suffers from poor timeliness and delayed recall of stale information. If incorrect traffic flow information is published, the user receives incorrect navigation and may be routed onto the wrong road. If the error persists, the user may be forced into detours and the experience degrades; in severe cases this can lead to traffic violations and accidents.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, and storage medium for identifying traffic flow of a road.
According to an aspect of the present disclosure, there is provided a method of identifying traffic flow of a road, including:
acquiring attribute data of road sections at traffic light intersections and images of the road sections;
determining the category of the road section according to the attribute data;
according to the category of the road section and the image of the road section, respectively calculating the queuing coefficient of each lane on the road section; and
identifying the traffic flow of each lane according to the queuing coefficient.
According to another aspect of the present disclosure, there is provided an apparatus for identifying traffic flow of a road, including:
the acquisition module is configured to acquire attribute data of road sections at traffic light intersections and images of the road sections;
a category determination module configured to determine a category of the road segment based on the attribute data;
the coefficient calculation module is configured to calculate the queuing coefficient of each lane on the road section according to the category of the road section and the image of the road section; and
an identification module configured to identify the traffic flow of each lane according to the queuing coefficient.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described method.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the above method.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a method of identifying traffic flow of a roadway according to an embodiment of the present disclosure;
fig. 2A and 2B are schematic diagrams of application scenarios according to embodiments of the present disclosure;
fig. 2C is a schematic diagram showing a photographing angle of view of the image capturing apparatus in fig. 2A and 2B;
FIG. 3 is a schematic illustration of the calculation of queuing coefficients according to embodiments of the disclosure;
FIG. 4 is a schematic diagram of a model training process according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of an apparatus for identifying traffic flow of a link according to another embodiment of the present disclosure; and
fig. 6 is a block diagram of an electronic device for implementing a method of identifying traffic flow of a link in accordance with an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In vehicle navigation, the publication of real-time road traffic flow information based on vehicle driving trajectories is affected by many subjective and objective factors, such as driving on highways, waiting at traffic-light intersections, passing through tollgates or tunnels, a user stopping abnormally during driving, or a user closing the navigation application. In addition, the stopping distance and the number of stops while waiting at a traffic-light intersection strongly interfere with traffic flow prediction. The embodiments of the present disclosure provide a method for identifying the traffic flow of a road that aims to eliminate the influence of these subjective and objective factors and improve the accuracy of real-time traffic flow information publication.
Fig. 1 is a flow chart of a method 100 of identifying traffic flow of a road according to an embodiment of the present disclosure. As shown in fig. 1, the method comprises the steps of:
in step S110, attribute data of a road segment at a traffic light intersection and an image of the road segment are acquired.
In step S120, a category of the road segment is determined from the attribute data.
In step S130, a queuing coefficient for each lane on the road section is calculated based on the class of the road section and the image of the road section, respectively.
In step S140, the traffic flow of each lane is identified according to the queuing coefficient.
Because traffic conditions at traffic light intersections are relatively complex, in embodiments of the present disclosure, road topology at various traffic light intersections is considered. Specifically, in step S110, the acquired attribute data of the road segment includes spatial attribute data and temporal attribute data.
According to an embodiment, the spatial attribute data may include the road grade, road traffic capacity, road speed limit, road width, and number of lanes. In particular embodiments, the road grade indicates the type of the road segment; for example, highway, urban road, inter-town road, or road within a residential area. The road traffic capacity indicates the typical volume of traffic the segment carries over a period of time under normal conditions. The road speed limit is the limit on the speed of vehicles traveling on the segment. The road width and the number of lanes are parameters describing the structure of the segment. This spatial attribute data may be obtained by querying the databases of the relevant authorities, from signage on the road segment, or by actual measurement of the segment; the embodiments of the present disclosure do not restrict the specific method of data acquisition.
According to an embodiment, the temporal attribute data may indicate the time period in which the method is executed. The traffic flow of a road segment may differ across time periods. For example, traffic flow on most road segments is heavy during the morning rush hour, while during off-peak hours (e.g., midday) it falls relative to the rush hour. If the same recognition criterion were used in both cases to decide whether a segment is clear or congested, every segment might be judged congested during the morning rush hour and no navigation route could be obtained. Therefore, in embodiments of the present disclosure, a day is divided into multiple time periods, and traffic flow is identified within each period.
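As a minimal sketch of such a division (the four period boundaries below are illustrative assumptions, not values fixed by the disclosure), a day might be mapped to period indices like this:

```python
def time_period(hour: int) -> int:
    """Map an hour of day (0-23) to a coarse time-period index.

    The boundaries (night, morning rush, midday, evening rush) are
    illustrative; the disclosure does not prescribe a specific division.
    """
    if 7 <= hour < 10:
        return 1  # morning rush hour
    if 10 <= hour < 17:
        return 2  # midday / afternoon off-peak
    if 17 <= hour < 20:
        return 3  # evening rush hour
    return 0      # night
```

The resulting index can then serve as the time-period component of the attribute data used for classifying the road segment.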
The image of the road segment may most intuitively reflect the relevant information of the vehicle on the road segment. Therefore, in step S110, an image on the road section is also acquired, and the traffic flow of the road section is identified by analyzing the image of the road section. In an embodiment of the present disclosure, an image on a road section is acquired by an image capturing apparatus provided at a traffic light intersection, that is, an image on a road section is acquired from a high place, and related information of a vehicle waiting for a traffic light on the road section, such as queuing information of the vehicle, is contained in the acquired image. According to an embodiment, an image on a road section may be acquired using a monitoring camera installed near a traffic light in existing urban traffic, but the embodiment of the present disclosure is not limited thereto, and other image capturing devices may be employed to acquire an image on a road section. For example, an image capturing device dedicated to traffic flow identification may be installed at a traffic light.
Next, in step S120, the category of the road segment is determined from the acquired attribute data. Road segments belonging to the same category can use the same recognition method to determine traffic flow. In embodiments of the present disclosure, the category is determined jointly from the spatial and temporal attribute data described above. For the same road segment, although its spatial attribute data is fixed, the traffic it carries may differ across time periods of the day, so different recognition criteria may be needed. For example, a segment may be very likely to have long vehicle queues between 6:00 and 10:00 but unlikely to have them between 10:00 and 14:00; the segment during 6:00–10:00 and the same segment during 10:00–14:00 are therefore assigned to different categories so that different recognition criteria are applied. As another example, consider a four-lane segment and a two-lane segment: since the four-lane segment has greater traffic throughput, the four-lane segment during 6:00–10:00 and the two-lane segment during 10:00–14:00 may fall into the same category. These examples merely illustrate the present disclosure and are not to be construed as limiting it.
Next, in step S130, the queuing coefficients of the vehicles for each lane on the road section are calculated based on the image. In an embodiment of the present disclosure, traffic flow for road segments is defined according to a queuing distance of a vehicle. More specifically, the traffic flow of each lane on the road section is determined according to the queuing distance of the vehicles of each lane on the road section, so that the identification of the traffic flow of each lane and the release of information are realized.
According to an embodiment, the traffic flow of a road segment may be defined by the relationship between the vehicle queuing distance and the release distance of the traffic light (the distance of queued vehicles released during one green phase). A queuing distance less than or equal to one release distance is defined as a clear road. A queuing distance greater than one release distance but less than or equal to two release distances is defined as slow travel. A queuing distance greater than two release distances is defined as a congested road.
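This definition can be expressed directly as a comparison; `release_dist` denotes the distance of queued vehicles released during one green phase, and the label strings are illustrative:

```python
def flow_from_queue(queue_dist: float, release_dist: float) -> str:
    """Classify a lane's traffic flow from its vehicle queuing distance
    and the traffic light's one-pass release distance."""
    if queue_dist <= release_dist:
        return "clear"        # queue clears within one green phase
    if queue_dist <= 2 * release_dist:
        return "slow"         # queue clears within two green phases
    return "congested"        # queue needs more than two green phases
```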
According to an embodiment of the present disclosure, the queuing coefficient calculated based on the image may characterize the queuing situation of the vehicle while waiting for the traffic light, and thus, in step S140, the traffic flow of the road section may be identified based on the queuing coefficient.
With the method for identifying the traffic flow of a road according to the embodiments of the present disclosure, traffic flow is identified from images of the road segment acquired in real time. Compared with the traditional approach of deriving navigation information from vehicle driving trajectories, the method is therefore more timely, avoiding erroneous navigation caused by late publication of traffic flow information. In addition, the image capturing device installed at the traffic light provides a stable stream of image data, so the image-based identification method runs stably. This avoids both the abnormal trajectories caused by users stopping abnormally or closing navigation and the inaccurate traffic flow identification such trajectories produce, significantly improving the accuracy of traffic flow identification.
Fig. 2A and 2B are schematic diagrams of application scenarios according to embodiments of the present disclosure. Fig. 2A shows a traffic-light intersection of a five-lane urban road, and fig. 2B shows a two-way single-lane inter-town road. As shown in fig. 2A, the traffic light is mounted at a standard height, and an image capturing device installed near it acquires images of the road segment from a position high above the ground. As shown in fig. 2B, the traffic light is mounted in a non-standard, low form, and the image capturing device installed near it acquires images from a position close to the ground. Because the images are taken looking down at the road, the shooting angle causes the same vehicle queuing distance to present different distance information in different images. According to embodiments of the present disclosure, classifying the road segments shown in fig. 2A and fig. 2B and processing their images according to their different categories eliminates the influence of the differing shooting angles caused by the different placements of the image capturing devices.
Fig. 2C is a schematic diagram showing a photographing angle of view of the image capturing apparatus in fig. 2A and 2B. As shown in fig. 2C, the image capturing apparatuses (e.g., cameras) C1 and C2 are apparatuses of the same model, and in the case where the mounting heights of the cameras C1 and C2 are different, the distance information presented for the same queuing distance is different in the images they each capture. As shown in fig. 2C, the queuing distance of the vehicle acquired from the image captured by the camera C1 is S1, and the length of the acquired road section is L1. The queuing distance of the vehicle acquired from the image photographed by the camera C2 is S2, and the length of the acquired road section is L2. As can be seen from fig. 2C, the queuing distance S1 is greater than the queuing distance S2, and the length L1 of the road section is smaller than the length L2. If the classification processing is not performed on the road segments shown in fig. 2A and 2B, different results will be calculated according to the acquired information, thereby causing traffic flow recognition errors.
According to an embodiment, the road segments are classified using a first deep learning model. Specifically, determining the category of a road segment from its attribute data includes: constructing a feature vector from the road grade, road traffic capacity, road speed limit, road width, number of lanes, and time period, and inputting the feature vector into the first deep learning model, which classifies the road segment to determine its category.
In a specific embodiment, the first deep learning model may employ an LDA (Linear Discriminant Analysis) model. LDA is a supervised learning model. Its basic idea is to project high-dimensional samples into an optimal discriminant vector space, extracting classification information and compressing the feature-space dimensionality, while ensuring that after projection the samples have the largest inter-class distance and the smallest intra-class distance in the new subspace, i.e., optimal separability. According to an embodiment, the first deep learning model is obtained by training the LDA model. A 6-dimensional feature vector built from the road grade, road traffic capacity, road speed limit, road width, number of lanes, and time period is input to the model, which outputs the category to which a road segment belongs, e.g., the numeral 3 indicating the third category. According to an embodiment, during training of the LDA model, the number of output categories can be changed by adjusting model parameters.
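A minimal sketch of this classification step using scikit-learn's LDA implementation; the feature values, category labels, and helper name below are hypothetical stand-ins, whereas the disclosure trains on real road-segment data:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training data: one 6-dimensional feature vector per sample,
# in the order [road grade, traffic capacity, speed limit, road width,
# number of lanes, time-period index].
X = [
    [3, 1200, 60, 14.0, 4, 0],  # wide urban road, morning rush
    [3, 1150, 60, 14.5, 4, 0],
    [1,  300, 40,  6.0, 1, 1],  # narrow inter-town road, midday
    [1,  280, 40,  6.5, 1, 1],
]
y = [0, 0, 1, 1]  # hypothetical road-segment category labels

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

def segment_category(features):
    """Return the predicted road-segment category for one feature vector."""
    return int(lda.predict([features])[0])
```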
According to an embodiment, after the first deep learning model has classified the road segments into categories, a second deep learning model for image analysis is selected per road-segment category. Specifically, calculating the queuing coefficient of each lane from the segment's category and image includes: determining the second deep learning model according to the category of the road segment, and inputting the image into that model to calculate the queuing coefficient of each lane on the segment.
In particular embodiments, the second deep learning model may employ a semantic segmentation model, trained on image samples carrying pixel-level segmentation labels so that it can distinguish the different objects in an image. In embodiments of the present disclosure, a corresponding second deep learning model is created for each road-segment category determined by the first deep learning model, and the second models for different categories correspond to the same traffic flow identification criteria. In this way, even though the same physical queuing distance presents different distance information in different images, recognition results consistent with the actual situation can be obtained under per-category recognition standards.
In embodiments of the present disclosure, the ratio of the vehicle queuing distance to the lane length is taken as the queuing coefficient. The larger the queuing coefficient, the longer the queuing distance, i.e., the more vehicles are queued; the smaller the coefficient, the shorter the queue, i.e., the fewer vehicles are queued. When queuing information is recognized from an image, the queuing coefficient may be computed as the proportion of the lane occupied by vehicles.
According to an embodiment, the second deep learning model receives an image of a road segment classified by the first deep learning model and outputs the number of pixels contained in each of several specified objects in the image. Specifically, calculating the queuing coefficient of each lane using the second deep learning model includes: determining, with the model, a first pixel count for each lane's image region and a second pixel count for the vehicles on that lane, and calculating the lane's queuing coefficient as the ratio of the second pixel count to the first.
Fig. 3 is a schematic diagram of the calculation of queuing coefficients according to embodiments of the disclosure. As shown in fig. 3, the specified object in the image includes a lane 1 and vehicles 1 that are lined up on the lane 1 (as shown in fig. 3, the vehicles 1 include two vehicles), and also includes a lane 2 and vehicles 2 that are lined up on the lane 2 (as shown in fig. 3, the vehicles 2 include one vehicle). The number of pixels N1 included in the lane 1, the number of pixels M1 included in the vehicle 1, the number of pixels N2 included in the lane 2, and the number of pixels M2 included in the vehicle 2 in the image can be obtained by using the second deep learning model, respectively. The queuing coefficient for lane 1 is calculated as M1/N1 and the queuing coefficient for lane 2 is calculated as M2/N2.
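The pixel-ratio computation of fig. 3 can be sketched as follows. Here `mask` is a 2-D grid of per-pixel class labels, as a semantic segmentation model might output; the assumption that the lane pixel count includes pixels covered by queued vehicles (so the ratio stays in [0, 1]) is ours, not stated in the disclosure:

```python
def queuing_coefficient(mask, lane_label, vehicle_label):
    """Compute one lane's queuing coefficient M/N from a segmentation mask.

    N counts the lane's pixels (including those occupied by vehicles);
    M counts the vehicle pixels on that lane.
    """
    lane_pixels = 0
    vehicle_pixels = 0
    for row in mask:
        for label in row:
            if label == vehicle_label:
                vehicle_pixels += 1
                lane_pixels += 1  # a vehicle pixel also lies on the lane
            elif label == lane_label:
                lane_pixels += 1
    return vehicle_pixels / lane_pixels if lane_pixels else 0.0
```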
According to the method of the embodiment of the disclosure, by classifying for different road segments and calculating the queuing coefficient for each classification respectively, the recognition error caused by the different shooting angles of the image capturing device is eliminated. According to the method of the embodiment of the disclosure, the vehicle queuing information of each lane can be acquired based on the image, so that the traffic flow is identified for each lane.
According to an embodiment, identifying the traffic flow of each lane from its queuing coefficient includes: identifying the lane's traffic flow as clear if the coefficient is less than or equal to a first threshold; as slow travel if the coefficient is greater than the first threshold and less than or equal to a second threshold; and as congested if the coefficient is greater than the second threshold. In this identification process, the first and second thresholds are determined separately for each road-segment category.
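The threshold rule above is a simple comparison; the label strings are illustrative:

```python
def identify_lane_flow(coeff: float, first_threshold: float,
                       second_threshold: float) -> str:
    """Identify a lane's traffic flow from its queuing coefficient,
    using the per-category first and second thresholds."""
    if coeff <= first_threshold:
        return "clear"
    if coeff <= second_threshold:
        return "slow"
    return "congested"
```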
According to an embodiment, the first and second thresholds for each road-segment category are determined as follows. For each of a number of image samples, the attribute data of the associated road segment is obtained, and the segments are classified with the first deep learning model according to that data, yielding several category sets containing different numbers of road segments. For each category set, the queuing coefficient of every lane is calculated from the segment images, and the actual traffic flow of each lane (clear, slow, or congested) is determined from the lane's real queuing distance (not the distance information recovered from the image) and the traffic light's release distance. This yields a correspondence between queuing coefficients and actual traffic flow. Aggregating the cases where a lane is clear yields a first set of queuing coefficients; aggregating the cases where a lane is traveling slowly yields a second set of queuing coefficients. The average of the coefficients in the first set is taken as the first threshold, and the average of the coefficients in the second set as the second threshold. In other embodiments, the cases where a lane is congested may also be aggregated into a third set of queuing coefficients; the average of that set, together with the second threshold determined from the second set, can be used to re-determine the second threshold. Alternatively, the maximum, minimum, or median of the coefficients in the first and second sets may be used as the first or second threshold, respectively. Embodiments of the present disclosure are not limited in this respect.
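The averaging scheme for deriving per-category thresholds can be sketched as follows; `samples` pairs each observed queuing coefficient with the actual flow determined from real queuing distances, and the data and function name are hypothetical:

```python
def estimate_thresholds(samples):
    """Derive (first_threshold, second_threshold) for one road-segment
    category from (queuing_coefficient, actual_flow) pairs.

    The first threshold is the mean coefficient over 'clear' samples
    and the second the mean over 'slow' samples, matching the averaging
    variant described above.
    """
    clear = [c for c, flow in samples if flow == "clear"]
    slow = [c for c, flow in samples if flow == "slow"]
    return sum(clear) / len(clear), sum(slow) / len(slow)
```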
According to an embodiment, the first and second thresholds for each road-segment category may be determined during the model training phase and assembled into a query dictionary, so that at identification time the thresholds are obtained by looking them up in the dictionary and the traffic flow of the road segment is identified accordingly.
Fig. 4 is a schematic diagram of a training process of a second deep learning model according to an embodiment of the present disclosure. As shown in fig. 4, a plurality of first sample data 402 is first classified (as shown in operation (1) in fig. 4) using a first deep learning model 401 to determine a category of each first sample data. Then, for the first sample data of each category, the first sample data is selected from the categories based on the attribute data of the road section to which the first sample data relates (as shown in operation (2) in fig. 4) as the second sample data 403. Next, pixel-level labeling is performed on the second sample data (as shown in operation (3) in fig. 4). Then, training of the second deep learning model is performed using the labeled second sample data as training data (as shown in operation (4) in fig. 4), resulting in second deep learning models 404 for each category, respectively.
According to an embodiment, the operation (1) is the same as the classification prediction process using the first deep learning model, and will not be described here.
According to an embodiment, in operation (2), data from within one month may be selected from the classified first sample data as the second sample data. In addition, when selecting training samples, sample balance is ensured: the number of samples across traffic light intersections of different categories is balanced, and the number of samples of the same category of traffic light intersection across different time periods is balanced. During selection, the pair (road segment, time period) may be used as the selection key.
According to an embodiment, in operation (3), the second sample data are labeled at the pixel level. Specifically, different lanes receive distinct labels, and vehicles in different lanes likewise receive distinct labels. For example, for two lanes, 5 label categories are required: lane 1, lane 2, vehicle 1, vehicle 2, and other.
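The labeling scheme generalizes to any lane count: one class per lane, one class per lane's vehicles, plus a background class. A small sketch, with names and class ids chosen for illustration (the patent only specifies the example of 5 classes for two lanes):

```python
def make_label_map(num_lanes):
    """Build the pixel-class map for per-lane semantic segmentation labeling.

    Produces one class per lane, one per that lane's vehicles, and a
    shared "other" (background) class, as in operation (3).
    """
    labels = {"other": 0}
    for i in range(1, num_lanes + 1):
        labels[f"lane_{i}"] = len(labels)
        labels[f"vehicle_{i}"] = len(labels)
    return labels
```

For two lanes this yields exactly the 5 classes named in the text.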
According to an embodiment, in operation (4), the sampling range of the time domain is fixed during the model training phase. In some embodiments, separate sampling may also be performed for different provinces. A deep learning model based on image segmentation is adopted, and a large number of labeled samples are used for model training. K-fold cross validation is used to improve the generalization ability of the model, and the problem of sample imbalance is addressed by combining a weighted loss function with oversampling and undersampling. Overfitting can thus be effectively prevented, and the offline model is obtained by integrating ten-thousand-level samples.
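The weighted loss function mentioned above is not specified further; one common choice for countering pixel-class imbalance in segmentation is to weight each class inversely to its frequency, so that rare classes (vehicles, narrow lanes) contribute more to the loss than the dominant background. A sketch under that assumption:

```python
import numpy as np

def inverse_frequency_weights(pixel_counts):
    """Per-class loss weights inversely proportional to class frequency.

    pixel_counts: pixel count of each class in the training set.
    Returns weights normalized so a uniform distribution gives weight 1.0
    to every class. This is one conventional scheme, not necessarily the
    one used by the authors.
    """
    counts = np.asarray(pixel_counts, dtype=float)
    return counts.sum() / (len(counts) * counts)
```

The resulting vector can be passed to a weighted cross-entropy loss in any common deep learning framework.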
Because different lanes in front of a traffic light intersection have different meanings, they can be divided into straight lanes and turning lanes. When existing navigation apps make predictions for different turning lanes, traffic flow is generally calculated from trajectories that have already turned. Because such trajectories are few, in most cases the number of trajectories is increased by performing turn prediction on the unturned trajectories via turning probabilities and filling in the predicted turns. This process introduces noise, the prediction error is large, and the calculation of traffic flow is affected. According to embodiments of the present disclosure, the image of a turning lane can be identified directly, so the accuracy of identifying the traffic flow of a turning lane can be improved.
The scheme uses information from cameras placed at a high position and an image segmentation algorithm to construct a mapping between road traffic flow and lane features; by obtaining the features of different lanes, a prediction of road traffic flow is obtained, providing more accurate and timely navigation information for users. The scheme can directly observe the queuing situation of vehicles in front of the light and is not affected by trajectory quality or quantity. Meanwhile, the situations of different lanes can be well distinguished, and the noise caused by turning trajectories is avoided.
According to embodiments of the present disclosure, the influence of driving trajectories can be avoided, the problems of inaccurate road condition publication and low congestion recall rate are solved, and timeliness is also well addressed. The recall rate of traffic information is improved for traffic light scenes, which helps users choose roads reasonably, scientifically guides user travel, reduces the probability of misleading users, saves travel time, and continuously improves the user's perceived experience.
Fig. 5 illustrates a block diagram of an apparatus 500 for identifying traffic flow of a link in accordance with an embodiment of the present disclosure. As shown in fig. 5, the apparatus 500 for identifying traffic flow of a road includes an acquisition module 510, a category determination module 520, a coefficient calculation module 530, and an identification module 540.
According to an embodiment, the acquisition module 510 is configured to acquire attribute data of road segments at traffic light intersections and images of the road segments. The category determination module 520 is configured to determine a category of the road segment based on the attribute data. The coefficient calculation module 530 is configured to calculate a queuing coefficient for each lane on a road segment based on the class of the road segment and the image of the road segment, respectively. The identification module 540 is configured to identify traffic flow for each lane based on the queuing coefficients.
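The coefficient calculation module's core operation, as stated in claim 1, is the ratio of the second pixel number (vehicle pixels on a lane) to the first pixel number (that lane's pixels), taken from the semantic segmentation output. A minimal sketch, with class ids assumed for illustration:

```python
import numpy as np

def queuing_coefficient(mask, lane_id, vehicle_id):
    """Compute one lane's queuing coefficient from a segmentation mask.

    mask: 2-D array of per-pixel class ids from the second deep learning model.
    lane_id / vehicle_id: the class ids for this lane and for vehicles on it
    (hypothetical ids; any labeling scheme with distinct per-lane classes works).
    """
    first_pixel_number = np.count_nonzero(mask == lane_id)      # lane pixels
    second_pixel_number = np.count_nonzero(mask == vehicle_id)  # vehicle pixels
    if first_pixel_number == 0:
        return 0.0
    # Claim 1: the queuing coefficient is the ratio of the second pixel
    # number to the first pixel number.
    return second_pixel_number / first_pixel_number
```

The identification module would then compare this value against the first and second thresholds for the road segment's category.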
The specific operations of the above functional modules may be obtained by referring to the operation steps of the method 100 for identifying traffic flow of a road in the foregoing embodiments, which are not described herein.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, for example, a method of identifying traffic flow of a road. For example, in some embodiments, the method of identifying traffic flow for a link may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the above-described method of identifying traffic flow of a link may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the method of identifying traffic flow of a road in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor, which may be special-purpose or general-purpose, may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (11)

1. A method of identifying traffic flow for a roadway, comprising:
acquiring attribute data of road sections at traffic light intersections and images of the road sections;
determining the category of the road section according to the attribute data;
according to the category of the road section and the image of the road section, respectively calculating the queuing coefficient of each lane on the road section; and
identifying the traffic flow of each lane according to the queuing coefficient;
the calculating the queuing coefficient of each lane on the road section according to the category of the road section and the image of the road section comprises the following steps:
determining a second deep learning model according to the category of the road section; the second deep learning model comprises a semantic segmentation model; and
inputting the image into the second deep learning model to calculate a queuing coefficient for each lane on the road segment using the second deep learning model, comprising: determining a first pixel number included in a lane image of each lane in the image and a second pixel number included in a vehicle image of a vehicle on each lane in the image by using the second deep learning model; and calculating the queuing coefficient of the lane according to the ratio of the second pixel number to the first pixel number.
2. The method of claim 1, wherein the attribute data comprises spatial attribute data including road class, road traffic capacity, road speed limit, road width, and number of lanes, and temporal attribute data indicating a current time period.
3. The method of claim 2, wherein the determining the category of the road segment from the attribute data comprises:
constructing a feature vector according to the road grade, the road traffic capacity, the road speed limit, the road width, the number of lanes and the time period; and
the feature vector is input to a first deep learning model to classify a road segment with the first deep learning model to determine a class of the road segment.
4. The method of claim 3, wherein the first deep learning model comprises a linear discriminant analysis model.
5. The method of claim 1, wherein the identifying traffic flow for each lane according to the queuing coefficients comprises:
identifying traffic flow of the lane as clear if the queuing coefficient is less than or equal to a first threshold;
identifying traffic flow of the lane as slow travel if the queuing coefficient is greater than the first threshold and less than or equal to a second threshold; and
and identifying traffic flow of the lane as congestion if the queuing coefficient is greater than the second threshold.
6. The method of claim 5, wherein the identifying traffic flow for each lane according to the queuing coefficients further comprises:
the first threshold and the second threshold are determined according to the category of the road section related to the queuing coefficient.
7. The method of claim 1, further comprising: an image of the road segment is acquired with an image capturing device arranged at the traffic light.
8. A method according to claim 3, further comprising:
classifying the plurality of first sample data with the first deep learning model to determine a class of each first sample data;
for first sample data of each category, selecting the first sample data from the categories according to attribute data of road segments related to the first sample data as second sample data;
performing pixel-level labeling on the second sample data; and
training of the second deep learning model is performed using the labeled second sample data as training data.
9. An apparatus for identifying traffic flow of a roadway, comprising:
the acquisition module is configured to acquire attribute data of road sections at traffic light intersections and images of the road sections;
a category determination module configured to determine a category of the road segment based on the attribute data;
the coefficient calculation module is configured to calculate the queuing coefficient of each lane on the road section according to the category of the road section and the image of the road section; and
the identifying module is configured to identify the traffic flow of each lane according to the queuing coefficient;
wherein the coefficient calculation module is further configured to:
determining a second deep learning model according to the category of the road section; the second deep learning model comprises a semantic segmentation model; and
inputting the image into the second deep learning model to calculate a queuing coefficient for each lane on the road segment using the second deep learning model, comprising: determining a first pixel number included in a lane image of each lane in the image and a second pixel number included in a vehicle image of a vehicle on each lane in the image by using the second deep learning model; and calculating the queuing coefficient of the lane according to the ratio of the second pixel number to the first pixel number.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
11. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-8.
CN202110127819.4A 2021-01-29 2021-01-29 Method, device, electronic equipment and medium for identifying traffic flow of road Active CN112784789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110127819.4A CN112784789B (en) 2021-01-29 2021-01-29 Method, device, electronic equipment and medium for identifying traffic flow of road

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110127819.4A CN112784789B (en) 2021-01-29 2021-01-29 Method, device, electronic equipment and medium for identifying traffic flow of road

Publications (2)

Publication Number Publication Date
CN112784789A CN112784789A (en) 2021-05-11
CN112784789B true CN112784789B (en) 2023-08-18

Family

ID=75759908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110127819.4A Active CN112784789B (en) 2021-01-29 2021-01-29 Method, device, electronic equipment and medium for identifying traffic flow of road

Country Status (1)

Country Link
CN (1) CN112784789B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09190533A (en) * 1996-01-11 1997-07-22 Mitsubishi Heavy Ind Ltd Vehicle detecting device
CN105070042A (en) * 2015-07-22 2015-11-18 济南市市政工程设计研究院(集团)有限责任公司 Modeling method of traffic prediction
CN106097726A (en) * 2016-08-23 2016-11-09 苏州科达科技股份有限公司 The detection determination in region, traffic information detection method and device
CN109714421A (en) * 2018-12-28 2019-05-03 国汽(北京)智能网联汽车研究院有限公司 Intelligent network based on bus or train route collaboration joins automobilism system
CN110364008A (en) * 2019-08-16 2019-10-22 腾讯科技(深圳)有限公司 Road conditions determine method, apparatus, computer equipment and storage medium
WO2020107523A1 (en) * 2018-11-27 2020-06-04 上海芯仑光电科技有限公司 Vehicle lane line detection method, vehicle, and computing device
CN111460921A (en) * 2020-03-13 2020-07-28 华南理工大学 Lane line detection method based on multitask semantic segmentation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10818165B2 (en) * 2018-04-19 2020-10-27 Here Global B.V. Method, apparatus, and system for propagating learned traffic sign data in a road network
CN110889328B (en) * 2019-10-21 2023-05-30 大唐软件技术股份有限公司 Method, device, electronic equipment and storage medium for detecting road traffic condition

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09190533A (en) * 1996-01-11 1997-07-22 Mitsubishi Heavy Ind Ltd Vehicle detecting device
CN105070042A (en) * 2015-07-22 2015-11-18 济南市市政工程设计研究院(集团)有限责任公司 Modeling method of traffic prediction
CN106097726A (en) * 2016-08-23 2016-11-09 苏州科达科技股份有限公司 The detection determination in region, traffic information detection method and device
WO2020107523A1 (en) * 2018-11-27 2020-06-04 上海芯仑光电科技有限公司 Vehicle lane line detection method, vehicle, and computing device
CN109714421A (en) * 2018-12-28 2019-05-03 国汽(北京)智能网联汽车研究院有限公司 Intelligent network based on bus or train route collaboration joins automobilism system
CN110364008A (en) * 2019-08-16 2019-10-22 腾讯科技(深圳)有限公司 Road conditions determine method, apparatus, computer equipment and storage medium
CN111460921A (en) * 2020-03-13 2020-07-28 华南理工大学 Lane line detection method based on multitask semantic segmentation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cooperative Traffic Light Control Based on Semi-real-time Processing; Qin Zhu et al.; Journal of Automation and Control Engineering; Vol. 4, No. 1; full text *

Also Published As

Publication number Publication date
CN112784789A (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN112634611B (en) Method, device, equipment and storage medium for identifying road conditions
CN113066285B (en) Road condition information determining method and device, electronic equipment and storage medium
JP7292355B2 (en) Methods and apparatus for identifying vehicle alignment information, electronics, roadside equipment, cloud control platforms, storage media and computer program products
EP3951741B1 (en) Method for acquiring traffic state, relevant apparatus, roadside device and cloud control platform
CN114120650B (en) Method and device for generating test results
CN112818792A (en) Lane line detection method, lane line detection device, electronic device, and computer storage medium
KR20220146670A (en) Traffic anomaly detection methods, devices, devices, storage media and programs
CN115359471A (en) Image processing and joint detection model training method, device, equipment and storage medium
CN112528927A (en) Confidence determination method based on trajectory analysis, roadside equipment and cloud control platform
CN114926791A (en) Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
CN112447060A (en) Method and device for recognizing lane and computing equipment
CN112883236A (en) Map updating method, map updating device, electronic equipment and storage medium
CN112784789B (en) Method, device, electronic equipment and medium for identifying traffic flow of road
CN116794619A (en) Radar debugging processing method and device, electronic equipment and storage medium
US20230065341A1 (en) Road data monitoring method and apparatus, electronic device and storage medium
CN113837268B (en) Method, device, equipment and medium for determining track point state
CN112818972B (en) Method and device for detecting interest point image, electronic equipment and storage medium
CN115526837A (en) Abnormal driving detection method and device, electronic equipment and medium
CN114998387A (en) Object distance monitoring method and device, electronic equipment and storage medium
CN113807209A (en) Parking space detection method and device, electronic equipment and storage medium
CN113902898A (en) Training of target detection model, target detection method, device, equipment and medium
CN112861701A (en) Illegal parking identification method and device, electronic equipment and computer readable medium
CN112926630A (en) Route planning method, route planning device, electronic equipment and computer readable medium
CN115662167B (en) Automatic driving map construction method, automatic driving method and related devices
CN117789460A (en) Road condition prediction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant