CN112784789A - Method, apparatus, electronic device, and medium for recognizing traffic flow of road - Google Patents


Info

Publication number
CN112784789A
Authority
CN
China
Prior art keywords: road, lane, queuing, coefficient, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110127819.4A
Other languages: Chinese (zh)
Other versions: CN112784789B (en)
Inventor
暴雨
梁海金
杨玲玲
李成洲
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110127819.4A
Publication of CN112784789A
Application granted
Publication of CN112784789B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 - Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/267 - Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/017 - Detecting movement of traffic to be counted or controlled, identifying vehicles
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems


Abstract

The present disclosure provides a method for identifying the traffic flow of a road, relating to the technical field of image processing, and in particular to the fields of deep learning and intelligent transportation. The implementation scheme is as follows: acquire attribute data of a road segment at a traffic light intersection and an image of the road segment; determine the category of the road segment according to the attribute data; calculate a queuing coefficient for each lane on the road segment according to the category of the road segment and the image of the road segment; and identify the traffic flow of each lane according to the queuing coefficients. The present disclosure also provides an apparatus, an electronic device, and a storage medium for identifying the traffic flow of a road.

Description

Method, apparatus, electronic device, and medium for recognizing traffic flow of road
Technical Field
The present disclosure relates to the field of image processing technologies, in particular to the fields of deep learning and intelligent transportation, and specifically to a method, an apparatus, an electronic device, and a storage medium for identifying the traffic flow of a road.
Background
The distribution of real-time road traffic flow information is an indispensable component of vehicle navigation. Publishing road traffic flow information based on vehicle travel tracks suffers from poor timeliness, and erroneous information is recalled too slowly. If erroneous traffic flow information is published, the user may receive erroneous navigation and travel along a wrong route. If the wrong route is not corrected for a long time, the user detours and has a poor experience; in serious cases, traffic violations and accidents may result.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for recognizing traffic flow of a road.
According to an aspect of the present disclosure, there is provided a method of identifying the traffic flow of a road, including:
acquiring attribute data of a road segment at a traffic light intersection and an image of the road segment;
determining the category of the road segment according to the attribute data;
calculating a queuing coefficient for each lane on the road segment according to the category of the road segment and the image of the road segment; and
identifying the traffic flow of each lane according to the queuing coefficient.
According to another aspect of the present disclosure, there is provided an apparatus for identifying the traffic flow of a road, including:
an acquisition module configured to acquire attribute data of a road segment at a traffic light intersection and an image of the road segment;
a category determination module configured to determine the category of the road segment according to the attribute data;
a coefficient calculation module configured to calculate a queuing coefficient for each lane on the road segment according to the category of the road segment and the image of the road segment; and
an identification module configured to identify the traffic flow of each lane according to the queuing coefficient.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above-described method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the above method.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the above-described method.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart of a method of identifying traffic flow of roads according to an embodiment of the present disclosure;
fig. 2A and 2B are schematic diagrams of application scenarios to which embodiments of the present disclosure relate;
fig. 2C is a schematic diagram showing a photographing angle of view of the image capturing apparatus in fig. 2A and 2B;
FIG. 3 is a schematic diagram of the calculation of queuing coefficients according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a model training process according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an apparatus for identifying a traffic flow of a road according to another embodiment of the present disclosure; and
fig. 6 is a block diagram of an electronic device for implementing a method of identifying traffic flow of roads according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In vehicle navigation, the distribution of real-time road traffic flow information based on vehicle travel trajectories is affected by many subjective and objective factors, such as driving on a highway, waiting at a traffic light intersection, passing a toll booth or driving through a tunnel, a user stopping abnormally while driving, or a user turning off navigation. In addition, the stopping distance and the number of stops of a user waiting at a traffic light intersection greatly interfere with traffic flow prediction. The embodiments of the present disclosure provide a method for identifying road traffic flow that aims to eliminate the influence of these subjective and objective factors and to improve the accuracy of real-time road traffic flow information publishing.
Fig. 1 is a flowchart of a method 100 of identifying traffic flow for roads according to an embodiment of the present disclosure. As shown in fig. 1, the method comprises the steps of:
in step S110, attribute data of a road segment at a traffic light intersection and an image of the road segment are acquired.
In step S120, a category of the road segment is determined from the attribute data.
In step S130, a queuing coefficient for each lane on the road section is calculated, respectively, based on the category of the road section and the image of the road section.
In step S140, the traffic flow of each lane is identified according to the queuing coefficient.
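As a rough sketch, steps S110-S140 can be wired together as follows (all function names, the category classifier, and the threshold values are illustrative stand-ins, not the patent's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class RoadSegment:
    """Attribute data of a road segment at a traffic light intersection."""
    road_grade: int
    capacity: float      # road traffic capacity
    speed_limit: float
    width: float
    num_lanes: int
    time_period: int     # index of the current time period of the day

def identify_traffic_flow(segment, image, classify_segment, queue_coeffs, thresholds):
    """S110-S140: classify the segment, compute per-lane queuing coefficients
    from the image, then map each coefficient to a flow state."""
    category = classify_segment(segment)       # S120: category from attributes
    coeffs = queue_coeffs(category, image)     # S130: one coefficient per lane
    t1, t2 = thresholds[category]              # per-category thresholds
    states = []
    for q in coeffs:                           # S140: coefficient -> flow state
        if q <= t1:
            states.append("clear")
        elif q <= t2:
            states.append("slow")
        else:
            states.append("congested")
    return states
```

Here `classify_segment` stands in for the first deep learning model and `queue_coeffs` for the per-category second deep learning model described below.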
Since the traffic conditions at the traffic light intersection are relatively complex, in the embodiment of the present disclosure, road topology forms at various traffic light intersections are considered. Specifically, in step S110, the acquired attribute data of the road segment includes spatial attribute data and temporal attribute data.
According to an embodiment, the spatial attribute data may include road grade, road traffic capacity, road speed limit, road width, and number of lanes. In particular embodiments, the road grade may be data indicating the type of the road segment; for example, the road grade distinguishes highways, urban traffic roads, inter-town traffic roads, roads within residential areas, and the like. The road traffic capacity may be data indicating the amount of traffic a road segment carries in a given time under normal conditions. The road speed limit data restricts the speed of vehicles traveling on the road segment. The road width and the number of lanes are both parameters describing the structure of the road segment. The above spatial attribute data may be obtained by querying a database of a relevant department, from relevant signage on the road segment, or by actual measurement of the road segment. The embodiments of the present disclosure do not limit the specific data acquisition method.
According to an embodiment, the temporal attribute data may indicate the time period in which the method is executed. The traffic flow of a road segment may differ across time periods. For example, traffic flow is generally high on most road segments during the morning commute rush hour, while outside the rush hour (e.g., at noon) it is reduced. If the same identification criterion were used to decide whether a road segment is clear or congested in both cases, every road segment might be identified as congested during the morning rush hour, and no navigation route could be obtained. Therefore, in the embodiment of the present disclosure, a day is divided into a plurality of time periods, and traffic flow is identified within each time period.
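Such time-period bucketing might be sketched as follows (the boundary hours are illustrative assumptions, not values from the patent):

```python
import bisect

# Illustrative period boundaries: [0,6) night, [6,10) morning rush,
# [10,14) midday, [14,17) afternoon, [17,20) evening rush, [20,24) night.
PERIOD_BOUNDARIES = (6, 10, 14, 17, 20)

def time_period(hour: int) -> int:
    """Map an hour of the day to the index of its time period."""
    return bisect.bisect_right(PERIOD_BOUNDARIES, hour)
```

The period index then becomes one component of the segment's feature vector.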
An image of a road segment most intuitively reflects information about the vehicles on it. Therefore, in step S110, an image of the road segment is also acquired, and the traffic flow of the road segment is identified by analyzing that image. In the embodiment of the present disclosure, the image is captured by an image capturing device installed at the traffic light intersection, i.e., the road segment is photographed from a high vantage point, so that the captured image contains information about the vehicles waiting for the traffic light, for example, their queuing information. According to the embodiment, the image may be acquired by a monitoring camera installed near a traffic light in existing urban traffic infrastructure, but the disclosed embodiments are not limited thereto; other image capturing devices may be employed. For example, an image capturing device dedicated to traffic flow identification may be installed at a traffic light.
Next, in step S120, the category of the road segment is determined from the acquired attribute data. For road segments belonging to the same category, the same identification method may be used to determine their traffic flow. In the embodiment of the present disclosure, the category of a road segment is determined by considering the spatial attribute data and the temporal attribute data together. For example, for the same road segment, the spatial attribute data remains unchanged, but the traffic flow it carries differs across time periods of the day, so different identification may be required: vehicles are likely to queue over long distances between 6:00 and 10:00 but unlikely to do so between 10:00 and 14:00. The road segment during 6:00-10:00 and the same road segment during 10:00-14:00 therefore need to be placed in different categories, so that different identification criteria are applied. As another example, consider a road segment with four lanes and another with two lanes. Since the traffic throughput of the four-lane segment is greater, the four-lane segment during 6:00-10:00 and the two-lane segment during 10:00-14:00 may be classified into the same category. It is to be understood that the above examples merely illustrate the present disclosure and are not to be construed as limiting it.
Next, in step S130, the queuing coefficient of the vehicle for each lane on the road section is calculated based on the image. In an embodiment of the present disclosure, a traffic flow for a road segment is defined according to a queuing distance of vehicles. More specifically, the traffic flow of each lane on the road section is determined according to the queuing distance of the vehicles of each lane on the road section, thereby realizing the identification of the traffic flow for each lane and the distribution of information.
According to an embodiment, the traffic flow of a road segment may be defined according to the relationship between the queuing distance of vehicles and the passing distance of the traffic light, i.e., the distance a queue advances during one light cycle. A queuing distance less than or equal to one passing distance is defined as clear; a queuing distance greater than one passing distance but less than or equal to twice the passing distance is defined as slow; and a queuing distance greater than twice the passing distance is defined as congested.
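This definition can be written directly (a sketch; `pass_dist` denotes the one-time traffic light passing distance described above):

```python
def flow_from_queue_distance(queue_dist: float, pass_dist: float) -> str:
    """Classify a lane's flow state from its vehicle queuing distance."""
    if queue_dist <= pass_dist:        # queue clears within one light cycle
        return "clear"
    if queue_dist <= 2 * pass_dist:    # queue clears within two light cycles
        return "slow"
    return "congested"                 # queue needs more than two cycles
```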
According to an embodiment of the present disclosure, the queuing coefficient calculated based on the image may represent a queuing situation of the vehicle while waiting for the traffic light, and thus, in step S140, the traffic flow of the road section may be identified based on the queuing coefficient.
According to the method for identifying the traffic flow of a road of the embodiment of the present disclosure, the traffic flow of a road segment can be identified from an acquired image of the segment. Because the image is acquired in real time, the method is more timely than the traditional approach of deriving navigation information from vehicle driving tracks, avoiding erroneous navigation caused by delayed publication of traffic flow information. In addition, an image capturing device installed at a traffic light provides stable image data, so that the image-based identification method can run reliably. This effectively avoids the abnormal driving tracks caused by a user parking abnormally or turning off navigation, and the inaccurate traffic flow identification those tracks would cause, thereby significantly improving the accuracy of traffic flow identification.
Fig. 2A and 2B are schematic diagrams of application scenarios to which embodiments of the present disclosure relate. Fig. 2A shows a traffic light intersection on an urban traffic road with five lanes, and fig. 2B shows a two-way, single-lane inter-town traffic road. As shown in fig. 2A, the traffic lights are mounted at a standard height, and an image capturing device installed near them acquires images of the road segment from a relatively high position. As shown in fig. 2B, the traffic light is installed in a non-standard form at a low height, and the image capturing device near it acquires images from a position closer to the ground. When a road segment is photographed from above, the camera's angle of view causes the same vehicle queuing distance to present different distance information in different images. According to the embodiment of the present disclosure, by placing the road segments of fig. 2A and fig. 2B into different categories and processing their images accordingly, the influence of the different shooting perspectives arising from the different installations of the image capturing devices can be eliminated.
Fig. 2C is a schematic diagram illustrating the shooting angles of view of the image capturing devices in fig. 2A and 2B. As shown in fig. 2C, the image capturing devices (e.g., cameras) C1 and C2 are of the same model, but because they are installed at different heights, the same queuing distance presents different distance information in their respective captured images. The queuing distance of the vehicles obtained from the image captured by camera C1 is S1, and the captured length of the road segment is L1; for camera C2 the corresponding values are S2 and L2. As can be seen in fig. 2C, S1 is greater than S2, while L1 is less than L2. If the road segments shown in fig. 2A and 2B were not classified separately, different results would be calculated from the acquired information, causing traffic flow identification errors.
According to an embodiment, the road segments are classified using a first deep learning model. Specifically, determining the category of the road segment according to the attribute data includes: constructing a feature vector from the road grade, road traffic capacity, road speed limit, road width, number of lanes, and time period, and inputting the feature vector into the first deep learning model, so that the first deep learning model classifies the road segment and determines its category.
In a specific embodiment, the first deep learning model may adopt an LDA (Linear Discriminant Analysis) model. The LDA model is a supervised learning model. Its basic idea is to project high-dimensional pattern samples into an optimal discriminant vector space, thereby extracting classification information and compressing the dimensionality of the feature space. After projection, the pattern samples have the maximum between-class distance and the minimum within-class distance in the new subspace, i.e., the patterns have optimal separability in that space. According to an embodiment, the first deep learning model is obtained by training an LDA model. A 6-dimensional feature vector constructed from the road grade, road traffic capacity, road speed limit, road width, number of lanes, and time period serves as the input of the first deep learning model, and the model outputs the category to which the road segment belongs, for example, the number 3 indicating the third category. According to an embodiment, the number of output categories may be changed by adjusting model parameters while training the LDA model.
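With scikit-learn, this classification step might look like the following sketch (the training rows, labels, and category count are fabricated for illustration; the patent's trained model is not public):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Rows: [road_grade, traffic_capacity, speed_limit, road_width, num_lanes, time_period]
X_train = np.array([
    [1, 2000.0, 60.0, 14.0, 4, 1],
    [1, 1800.0, 60.0, 14.0, 4, 2],
    [3,  400.0, 40.0,  7.0, 2, 1],
    [3,  350.0, 40.0,  7.0, 2, 2],
    [2,  900.0, 50.0, 10.5, 3, 1],
    [2,  850.0, 50.0, 10.5, 3, 2],
])
y_train = np.array([0, 1, 2, 2, 0, 1])  # road-segment category labels (illustrative)

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# A new segment's 6-dimensional feature vector is mapped to a category index.
category = int(lda.predict([[1, 1900.0, 60.0, 14.0, 4, 1]])[0])
```

The predicted index then selects which second deep learning model processes the segment's images.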
According to the embodiment, after the categories of the different road segments are obtained through classification by the first deep learning model, a second deep learning model for image analysis is selected for each road segment category. Specifically, calculating the queuing coefficient of each lane on the road segment according to the category of the road segment and the image of the road segment includes: determining a second deep learning model according to the category of the road segment, and inputting the image into that second deep learning model so as to calculate the queuing coefficient of each lane on the road segment.
In particular embodiments, the second deep learning model may employ a semantic segmentation model, trained on image samples with pixel-level segmentation labels so that it can distinguish different objects in an image. In the embodiment of the present disclosure, a corresponding second deep learning model is created for each road segment category determined by the first deep learning model. The second deep learning models for the different road segment categories each correspond to their own identification criteria for traffic flow. Therefore, even though the same actual queuing distance presents different distance information in different images, identification results that match actual conditions can be obtained under the respective criteria.
In an embodiment of the present disclosure, the ratio of the queuing distance of the vehicles to the lane length is used as the queuing coefficient. The larger the queuing coefficient, the longer the vehicle queue, i.e., the more vehicles are queued; the smaller the coefficient, the shorter the queue. When the queuing information is identified from an image, the queuing coefficient may be computed as the pixel occupancy of the vehicles relative to the lane.
According to an embodiment, the second deep learning model receives the image of the road segment classified by the first deep learning model and outputs the number of pixels belonging to each of a plurality of specified objects in the image. Specifically, calculating the queuing coefficient of each lane using the second deep learning model includes: determining, with the second deep learning model, a first pixel count belonging to the lane image of each lane and a second pixel count belonging to the vehicle images on that lane, and calculating the queuing coefficient of the lane as the ratio of the second pixel count to the first pixel count.
FIG. 3 is a diagram illustrating the calculation of queuing coefficients according to an embodiment of the disclosure. As shown in fig. 3, the specified objects in the image include a lane 1 and vehicles 1 queued on the lane 1 (as shown in fig. 3, the vehicles 1 include two vehicles), and also include a lane 2 and vehicles 2 queued on the lane 2 (as shown in fig. 3, the vehicles 2 include one vehicle). The number of pixels N1 included in the lane 1, the number of pixels M1 included in the vehicle 1, the number of pixels N2 included in the lane 2, and the number of pixels M2 included in the vehicle 2 in the image can be obtained by using the second deep learning model. The queuing coefficient for lane 1 is calculated as M1/N1 and the queuing coefficient for lane 2 is calculated as M2/N2.
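Given a semantic segmentation mask in which each pixel carries a class id, these ratios reduce to pixel counts (a sketch; the class-id scheme is an assumption for illustration):

```python
import numpy as np

def queue_coefficients(mask, lane_ids, vehicle_ids):
    """Per-lane queuing coefficients M/N from a segmentation mask.
    lane_ids[i] and vehicle_ids[i] are the class ids of lane i and of the
    vehicles queued on lane i, respectively."""
    coeffs = []
    for lane_id, veh_id in zip(lane_ids, vehicle_ids):
        n = int(np.count_nonzero(mask == lane_id))  # first pixel count: lane N
        m = int(np.count_nonzero(mask == veh_id))   # second pixel count: vehicles M
        coeffs.append(m / n if n else 0.0)
    return coeffs
```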
According to the method of the embodiment of the present disclosure, by classifying different road segments and calculating queuing coefficients per category, identification errors caused by the different shooting angles of image capturing devices are eliminated. Moreover, vehicle queuing information for each lane can be obtained from the image, so that the traffic flow can be identified per lane.
According to an embodiment, identifying the traffic flow of each lane according to the queuing coefficient includes: identifying the traffic flow of the lane as clear if the queuing coefficient is less than or equal to a first threshold; as slow if the queuing coefficient is greater than the first threshold and less than or equal to a second threshold; and as congested if the queuing coefficient is greater than the second threshold. In this identification process, the first threshold and the second threshold are determined separately for each road segment category.
According to an embodiment, the first and second thresholds for each road segment category are determined as follows. For each of a plurality of image samples, the attribute data of the road segment it depicts is acquired, and the road segments are classified by the first deep learning model, yielding a plurality of category sets containing different numbers of road segments. For each category set, the queuing coefficient of each lane is calculated from the image of the road segment, and the true traffic flow of the lane is determined from the actual queuing distance of that lane (not the distance information obtained from the image) and the passing distance of the traffic light, i.e., whether the lane is clear, slow, or congested. This yields correspondences between queuing coefficients and actual lane traffic flow. Aggregating the cases where the lane is clear gives a first set of queuing coefficients, and aggregating the cases where the lane is slow gives a second set of queuing coefficients. The average of the coefficients in the first set is taken as the first threshold, and the average of the coefficients in the second set as the second threshold. In other embodiments, the cases where the lane is congested may also be aggregated into a third set of queuing coefficients, and the average of that set, together with the second threshold determined from the second set, may be used to re-determine the second threshold. In still other embodiments, a maximum, minimum, or median of the coefficients in the first and second sets may be used as the first or second threshold, respectively. Embodiments of the present disclosure are not limited thereto.
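A minimal version of this threshold fitting, assuming ground-truth flow labels obtained from actual queuing distances, can be sketched as:

```python
def fit_thresholds(samples):
    """samples: (queuing_coefficient, true_flow) pairs for one road category.
    The first threshold is the mean coefficient over the 'clear' set, and the
    second threshold the mean coefficient over the 'slow' set."""
    clear = [q for q, flow in samples if flow == "clear"]
    slow = [q for q, flow in samples if flow == "slow"]
    first = sum(clear) / len(clear)    # first set -> first threshold
    second = sum(slow) / len(slow)     # second set -> second threshold
    return first, second
```

Maximum, minimum, or median aggregation could be swapped in for the mean, as the description notes.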
According to an embodiment, the first and second thresholds for each road segment category may be determined during the model training phase and stored in a dictionary, so that when identifying the traffic flow of a road with the model, the first and second thresholds are obtained by looking them up in the dictionary.
Fig. 4 is a schematic diagram of a training process of a second deep learning model according to an embodiment of the present disclosure. As shown in fig. 4, a plurality of first sample data 402 are first classified using a first deep learning model 401 (operation (1) in fig. 4) to determine the category of each first sample data. Then, for the first sample data of each category, first sample data are selected from the category according to the attribute data of the road segments they relate to (operation (2) in fig. 4) as second sample data 403. Next, pixel-level labeling is performed on the second sample data (operation (3) in fig. 4). Finally, the second deep learning model is trained using the labeled second sample data as training data (operation (4) in fig. 4), yielding a second deep learning model 404 for each category.
According to an embodiment, operation (1) is the same as the classification prediction process using the first deep learning model described above, and is not repeated here.
According to an embodiment, in operation (2), data within one month may be selected from the classified first sample data as the second sample data. In selecting training samples, sample balance is ensured: the numbers of samples for different categories of traffic light intersections are balanced, and the numbers of samples for the same category of traffic light intersection are balanced across different time periods. The selection may be keyed on road segment-time period.
According to an embodiment, in operation (3), pixel-level labeling is performed on the second sample data. Specifically, different lanes are given distinct labels, and vehicles on different lanes are likewise given distinct labels. For example, for a road segment with two lanes, five label categories are needed: lane 1, lane 2, vehicle 1, vehicle 2, and others.
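The label scheme above generalizes to 2n + 1 classes for n lanes (one class per lane, one per lane's vehicles, plus "others"). A minimal sketch, with hypothetical label names:

```python
# Build the pixel-level label map described above: distinct classes for
# each lane and for the vehicles on each lane, plus an "others" class.
def build_label_map(num_lanes):
    labels = ["others"]
    labels += [f"lane_{i}" for i in range(1, num_lanes + 1)]
    labels += [f"vehicle_{i}" for i in range(1, num_lanes + 1)]
    return {name: idx for idx, name in enumerate(labels)}

label_map = build_label_map(2)
print(len(label_map))  # 5 classes for two lanes, matching the example
```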
According to an embodiment, in operation (4), the sampling range in the time domain is fixed during the model training phase. In some embodiments, different provinces may also be sampled separately. A deep learning model based on image segmentation is trained with a large number of labeled samples; K-fold cross validation is used to improve the generalization capability of the model, and a weighted loss function combined with oversampling and undersampling is used to address sample imbalance. Overfitting can thereby be effectively prevented, and an offline model is obtained by aggregating samples at the ten-thousand scale.
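One common way to realize the "weighted loss function" mentioned above is a class-weighted pixel-wise cross-entropy with inverse-frequency weights, so that rare classes (e.g. sparse vehicle pixels) are not drowned out by the background class. The sketch below is an assumption about how such weighting could be implemented, not the patent's actual training code.

```python
import numpy as np

def class_weights(label_image, num_classes):
    """Inverse-frequency class weights from a labeled image."""
    counts = np.bincount(label_image.ravel(), minlength=num_classes)
    freq = counts / counts.sum()
    # Classes absent from the image get weight 0; others get 1/frequency.
    return np.where(freq > 0, 1.0 / np.maximum(freq, 1e-12), 0.0)

def weighted_cross_entropy(probs, labels, weights):
    """probs: (H, W, C) softmax outputs; labels: (H, W) class ids."""
    h, w = labels.shape
    # Pick each pixel's predicted probability for its true class.
    p = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(np.mean(weights[labels] * -np.log(np.clip(p, 1e-12, 1.0))))

# Tiny illustrative example: 2x2 image, 2 classes, uniform predictions.
labels = np.array([[0, 1], [1, 1]])
probs = np.full((2, 2, 2), 0.5)
loss = weighted_cross_entropy(probs, labels, class_weights(labels, 2))
```

The rare class 0 (one pixel) receives weight 4, while class 1 (three pixels) receives weight 4/3, so each class contributes equally to the mean loss.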
Because different lanes in front of a traffic light intersection have different meanings, they can be divided into straight lanes and turning lanes. When existing navigation apps predict conditions for turning lanes, they generally calculate traffic flow from turning trajectories. Because the number of completed turning trajectories is small, most of the time the trajectory count is augmented by predicting, based on turning probability, which not-yet-turned trajectories will turn and filling them in as turning trajectories. This process introduces noise and large prediction errors, which in turn affect the calculation of traffic flow. The method according to embodiments of the present disclosure can directly identify images of turning lanes, so the accuracy of identifying the traffic flow of turning lanes can be improved.
The scheme of this method aims to use information from a camera placed at a high position, together with an image segmentation algorithm, to construct a mapping relationship between road traffic flow and lane features, obtaining the features of different lanes so as to produce a prediction of road traffic flow and provide users with more accurate and timely navigation information. The scheme can directly observe the queuing condition of vehicles in front of the light, unaffected by trajectory quality and quantity. Meanwhile, the conditions of different lanes can be well distinguished, avoiding the noise introduced by turning trajectories.
According to embodiments of the present disclosure, the influence of driving trajectories can be avoided, the problems of inaccurate road condition publication and low congestion recall are solved, and timeliness is also well addressed. For traffic light scenarios, the recall of traffic information is improved, users are guaranteed to make reasonable route choices and guided scientifically in their travel, the probability of misleading users is reduced, travel time is saved, and the user's perceived experience is continuously improved.
Fig. 5 illustrates a block diagram of an apparatus 500 for identifying a traffic flow of a road according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus 500 for identifying traffic flow of a road includes an acquisition module 510, a category determination module 520, a coefficient calculation module 530, and an identification module 540.
According to an embodiment, the acquisition module 510 is configured to acquire attribute data of a road segment at a traffic light intersection and an image of the road segment. The category determination module 520 is configured to determine a category of the road segment from the attribute data. The coefficient calculation module 530 is configured to calculate a queuing coefficient for each lane on the road segment, respectively, from the category of the road segment and the image of the road segment. The identification module 540 is configured to identify the traffic flow of each lane according to the queuing coefficients.
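The four modules above compose into the end-to-end method. The following sketch shows one way they could be wired together; the callables are hypothetical stand-ins for the modules, not the patent's actual implementation.

```python
# Minimal composition of the apparatus 500 modules: acquisition (510),
# category determination (520), coefficient calculation (530), and
# identification (540). All callables are illustrative placeholders.
def identify_road_traffic_flow(segment_id, acquire, classify,
                               compute_coeffs, identify):
    attributes, image = acquire(segment_id)         # module 510
    category = classify(attributes)                 # module 520
    coefficients = compute_coeffs(category, image)  # module 530
    return {lane: identify(c)                       # module 540
            for lane, c in coefficients.items()}

# Usage with trivial stubs:
flows = identify_road_traffic_flow(
    "segment-1",
    acquire=lambda sid: ({"road_grade": 2}, "image-data"),
    classify=lambda attrs: "category-A",
    compute_coeffs=lambda cat, img: {"lane_1": 0.2, "lane_2": 0.8},
    identify=lambda c: "congested" if c > 0.6 else "clear",
)
print(flows)  # {'lane_1': 'clear', 'lane_2': 'congested'}
```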
The specific operations of the above functional modules may be obtained by referring to the operation steps of the method 100 for identifying a traffic flow of a road in the foregoing embodiment, and are not described herein again.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the respective methods and processes described above, such as the method of identifying traffic flow of a road. For example, in some embodiments, the method of identifying traffic flow of a road may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the above described method of identifying traffic flow of a road may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured by any other suitable means (e.g., by means of firmware) to perform the method of identifying the traffic flow of a road.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. A method of identifying traffic flow for a roadway, comprising:
acquiring attribute data of a road section at a traffic light intersection and an image of the road section;
determining the category of the road section according to the attribute data;
respectively calculating a queuing coefficient of each lane on the road section according to the type of the road section and the image of the road section; and
identifying the traffic flow of each lane according to the queuing coefficient.
2. The method of claim 1, wherein the attribute data includes spatial attribute data including road grade, road traffic capacity, road speed limit, road width, and number of lanes, and temporal attribute data indicating a current time period.
3. The method of claim 2, wherein the determining the category of the road segment from the attribute data comprises:
constructing a feature vector according to the road grade, the road traffic capacity, the road speed limit, the road width, the lane number and the time period; and
the feature vectors are input to a first deep learning model for classifying road segments with the first deep learning model to determine categories of the road segments.
4. The method of claim 3, wherein the first deep learning model comprises a linear discriminant analysis model.
5. The method of any one of claims 1 to 4, wherein the calculating a queuing coefficient for each lane on the road segment separately from the category of the road segment and the image of the road segment comprises:
determining a second deep learning model according to the category of the road section; and
inputting the image into the second deep learning model so as to calculate the queuing coefficient of each lane on the road segment by using the second deep learning model.
6. The method of claim 5, wherein the second deep learning model comprises a semantic segmentation model.
7. The method of claim 5, wherein the calculating a queuing coefficient for each lane on the road segment using the second deep learning model comprises:
determining a first number of pixels included in a lane image of each lane in the image and a second number of pixels included in a vehicle image of a vehicle on each lane in the image using the second deep learning model; and
calculating the queuing coefficient of the lane according to the ratio of the second number of pixels to the first number of pixels.
8. The method of claim 7, wherein the identifying the traffic flow for each lane according to the queuing coefficient comprises:
identifying traffic flow of the lane as clear if the queuing coefficient is less than or equal to a first threshold;
identifying traffic flow of the lane as slow-driving if the queuing coefficient is greater than the first threshold and less than or equal to a second threshold; and
identifying the traffic flow of the lane as congested if the queuing coefficient is greater than the second threshold.
9. The method of claim 8, wherein the identifying the traffic flow for each lane according to the queuing coefficient further comprises:
determining the first threshold and the second threshold according to the category of the road segment to which the queuing coefficient relates.
10. The method of claim 1, further comprising: acquiring an image of the road segment with an image capturing device disposed at a traffic light.
11. The method of claim 3, further comprising:
classifying a plurality of first sample data by using the first deep learning model to determine a category of each first sample data;
selecting first sample data from the categories according to attribute data of road sections related to the first sample data as second sample data for the first sample data of each category;
performing pixel level labeling on the second sample data; and
training the second deep learning model using the labeled second sample data as training data.
12. An apparatus for recognizing traffic flow of a road, comprising:
an acquisition module configured to acquire attribute data of a road segment at a traffic light intersection and an image of the road segment;
a category determination module configured to determine a category of the road segment according to the attribute data;
a coefficient calculation module configured to calculate a queuing coefficient of each lane on the road segment according to the category of the road segment and the image of the road segment; and
an identification module configured to identify the traffic flow of each lane according to the queuing coefficient.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-11.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-11.
CN202110127819.4A 2021-01-29 2021-01-29 Method, device, electronic equipment and medium for identifying traffic flow of road Active CN112784789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110127819.4A CN112784789B (en) 2021-01-29 2021-01-29 Method, device, electronic equipment and medium for identifying traffic flow of road

Publications (2)

Publication Number Publication Date
CN112784789A true CN112784789A (en) 2021-05-11
CN112784789B CN112784789B (en) 2023-08-18

Family

ID=75759908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110127819.4A Active CN112784789B (en) 2021-01-29 2021-01-29 Method, device, electronic equipment and medium for identifying traffic flow of road

Country Status (1)

Country Link
CN (1) CN112784789B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09190533A (en) * 1996-01-11 1997-07-22 Mitsubishi Heavy Ind Ltd Vehicle detecting device
CN105070042A (en) * 2015-07-22 2015-11-18 济南市市政工程设计研究院(集团)有限责任公司 Modeling method of traffic prediction
CN106097726A (en) * 2016-08-23 2016-11-09 苏州科达科技股份有限公司 The detection determination in region, traffic information detection method and device
CN109714421A (en) * 2018-12-28 2019-05-03 国汽(北京)智能网联汽车研究院有限公司 Intelligent network based on bus or train route collaboration joins automobilism system
CN110364008A (en) * 2019-08-16 2019-10-22 腾讯科技(深圳)有限公司 Road conditions determine method, apparatus, computer equipment and storage medium
US20190325736A1 (en) * 2018-04-19 2019-10-24 Here Global B.V. Method, apparatus, and system for propagating learned traffic sign data in a road network
CN110889328A (en) * 2019-10-21 2020-03-17 大唐软件技术股份有限公司 Method, device, electronic equipment and storage medium for detecting road traffic condition
WO2020107523A1 (en) * 2018-11-27 2020-06-04 上海芯仑光电科技有限公司 Vehicle lane line detection method, vehicle, and computing device
CN111460921A (en) * 2020-03-13 2020-07-28 华南理工大学 Lane line detection method based on multitask semantic segmentation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIN ZHU ET AL: "Cooperative Traffic Light Control Based on Semi-real-time Processing", Journal of Automation and Control Engineering, vol. 4, no. 1 *
GONG Shuai: "Mining classification rules from traffic flow data", Computer Engineering and Applications, no. 06 *

Also Published As

Publication number Publication date
CN112784789B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN113240909B (en) Vehicle monitoring method, equipment, cloud control platform and vehicle road cooperative system
CN110751828B (en) Road congestion measuring method and device, computer equipment and storage medium
CN113066285B (en) Road condition information determining method and device, electronic equipment and storage medium
JP7292355B2 (en) Methods and apparatus for identifying vehicle alignment information, electronics, roadside equipment, cloud control platforms, storage media and computer program products
CN112818792A (en) Lane line detection method, lane line detection device, electronic device, and computer storage medium
CN114170797B (en) Method, device, equipment, medium and product for identifying traffic restriction intersection
CN111681417B (en) Traffic intersection canalization adjusting method and device
CN112559371A (en) Automatic driving test method and device and electronic equipment
CN112883236A (en) Map updating method, map updating device, electronic equipment and storage medium
CN114802303A (en) Obstacle trajectory prediction method, obstacle trajectory prediction device, electronic device, and storage medium
CN114596709B (en) Data processing method, device, equipment and storage medium
CN115359471A (en) Image processing and joint detection model training method, device, equipment and storage medium
CN114676178A (en) Accident detection method and device and electronic equipment
CN112926630A (en) Route planning method, route planning device, electronic equipment and computer readable medium
CN116794619A (en) Radar debugging processing method and device, electronic equipment and storage medium
US20230065341A1 (en) Road data monitoring method and apparatus, electronic device and storage medium
CN115936282A (en) Method and device for optimizing score model, electronic equipment and storage medium
CN112784789B (en) Method, device, electronic equipment and medium for identifying traffic flow of road
CN115526837A (en) Abnormal driving detection method and device, electronic equipment and medium
CN115050000A (en) Running scene recognition method and device, computer equipment and storage medium
CN115206102A (en) Method, apparatus, electronic device, and medium for determining traffic path
CN114724113A (en) Road sign identification method, automatic driving method, device and equipment
CN112861701A (en) Illegal parking identification method and device, electronic equipment and computer readable medium
CN112258880B (en) Vehicle management system based on intelligent traffic
CN113947897B (en) Method, device and equipment for acquiring road traffic condition and automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant