CN113705273A - Traffic sign identification method and device and electronic equipment - Google Patents

Traffic sign identification method and device and electronic equipment

Info

Publication number
CN113705273A
Authority
CN
China
Prior art keywords
traffic sign
lane
determining
image frame
road section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010429989.3A
Other languages
Chinese (zh)
Inventor
郑航
陈安猛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Goldway Intelligent Transportation System Co Ltd
Original Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Goldway Intelligent Transportation System Co Ltd
Priority to CN202010429989.3A
Publication of CN113705273A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with fixed number of clusters, e.g. K-means clustering

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a traffic sign identification method, a traffic sign identification device, and an electronic device, wherein the method comprises the following steps: identifying a lane line in a collected image frame, and determining the position information of the lane line in a vehicle body coordinate system; when it is determined that a branch intersection exists in the collected image frame, determining a driving road section and a non-driving road section in the collected image frame based on the position information of the lane line in the vehicle body coordinate system; identifying a traffic sign in the collected image frame, and determining the position information of the traffic sign in the collected image frame; and, based on the driving road section and the non-driving road section in the collected image frame and the position information of the traffic sign in the collected image frame, determining the traffic sign matched with the driving road section as a valid traffic sign and the traffic sign matched with the non-driving road section as an invalid traffic sign. The method can optimize the recognition effect of traffic signs and provide more accurate data support for vehicle driving control.

Description

Traffic sign identification method and device and electronic equipment
Technical Field
The present disclosure relates to the field of driving assistance technologies, and in particular, to a method and an apparatus for identifying a traffic sign, and an electronic device.
Background
During driving, especially on highway sections, a vehicle may pass many traffic signs along a single road, such as speed limit signs, weight limit signs and no-stopping signs. When the driver approaches a certain traffic sign, such as a speed limit sign, if the specific speed limit on the sign can be automatically identified, the driver can be reminded to adjust the driving speed in time, so as to assist safe driving and prevent accidents caused by overspeed and the like.
In current traffic sign recognition schemes, lane lines are usually recognized using color features and edge feature information. These features have certain limitations: the lane line identification accuracy is low, and the detection effect is generally unstable.
In addition, in conventional traffic sign recognition schemes, when a branch intersection exists, it is impossible to distinguish whether a detected traffic sign belongs to the road section being traveled.
Disclosure of Invention
In view of the above, the present application provides a traffic sign identification method, a traffic sign identification device, and an electronic device.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of embodiments of the present application, there is provided a traffic sign identification method, including:
identifying a lane line in the collected image frame, and determining the position information of the lane line in a vehicle body coordinate system;
when it is determined that a branch intersection exists in the collected image frame, determining a driving road section and a non-driving road section in the collected image frame based on the position information of the lane line in the vehicle body coordinate system;
identifying a traffic sign in the acquired image frame and determining position information of the traffic sign in the acquired image frame;
and determining a traffic sign matched with the driving road section as a valid traffic sign and determining a traffic sign matched with the non-driving road section as an invalid traffic sign based on the driving road section and the non-driving road section in the acquired image frame and the position information of the traffic sign in the acquired image frame.
According to a second aspect of embodiments of the present application, there is provided a traffic sign recognition apparatus including:
the identification unit is used for identifying lane lines in the acquired image frames;
the first determining unit is used for determining the position information of the lane line in the vehicle body coordinate system;
the second determining unit is used for determining a driving road section and a non-driving road section in the collected image frames based on the position information of the lane line in the vehicle body coordinate system when it is determined that a branch intersection exists in the collected image frames;
the identification unit is also used for identifying the traffic signs in the acquired image frames;
the first determining unit is further used for determining the position information of the traffic sign in the acquired image frame;
and a third determining unit, configured to determine a traffic sign matching the driving road section as a valid traffic sign and determine a traffic sign matching the non-driving road section as an invalid traffic sign based on the driving road section and the non-driving road section in the acquired image frame and the position information of the traffic sign in the acquired image frame.
According to a third aspect of embodiments of the present application, there is provided an electronic device, including a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being configured to execute the machine-executable instructions to implement the above-mentioned traffic sign recognition method.
According to a fourth aspect of the embodiments of the present application, there is provided a machine-readable storage medium having stored therein machine-executable instructions, which when executed by a processor, implement the above-mentioned traffic sign recognition method.
The embodiment of the application has the following beneficial effects:
identifying a lane line in the collected image frame and determining the position information of the lane line in a vehicle body coordinate system; when it is determined that a branch intersection exists in the collected image frame, determining a driving road section and a non-driving road section in the collected image frame based on the position information of the lane line in the vehicle body coordinate system; identifying a traffic sign in the acquired image frame and determining position information of the traffic sign in the acquired image frame; and, based on the driving road section and the non-driving road section in the collected image frame and the position information of the traffic sign in the collected image frame, determining the traffic sign matched with the driving road section as a valid traffic sign and the traffic sign matched with the non-driving road section as an invalid traffic sign, so that the recognition effect of the traffic sign is optimized and more accurate data support is provided for vehicle driving control.
Drawings
FIG. 1 is a schematic flow chart of a traffic sign recognition method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a process of determining a driving road segment and a non-driving road segment in an acquired image frame according to an embodiment of the present application;
FIG. 3 is a schematic flow diagram of speed limit control according to an embodiment of the present application;
FIG. 4A is a schematic diagram of an instance segmentation result for lane lines and pavement markers according to an embodiment of the present application;
FIG. 4B is a schematic diagram of determining a current lane according to an embodiment of the present application;
FIG. 4C is a schematic illustration of determining travel segments and non-travel segments in accordance with an embodiment of the present application;
FIG. 4D is a schematic diagram of the speed limit sign matching with the lane according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a traffic sign recognition apparatus according to an embodiment of the present application;
fig. 6 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one type of device from another. For example, a first device may also be referred to as a second device, and similarly, a second device may also be referred to as a first device, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
The traffic sign recognition method according to the embodiment of the present application is described in more detail below, but should not be limited thereto.
Referring to fig. 1, a flow chart of a traffic sign recognition method according to an embodiment of the present disclosure is schematically shown, and as shown in fig. 1, the traffic sign recognition method may include the following steps:
in the embodiment of the application, the traffic sign identification method can be applied to a vehicle control system, such as a central control system of a vehicle.
Step S100, identifying the lane lines in the collected image frames, and determining the position information of the lane lines in the vehicle body coordinate system.
In the embodiment of the application, images, sounds and the like in the running process of the vehicle can be collected through the vehicle-mounted video collecting device. The onboard video capture device may include, but is not limited to, a tachograph or other onboard camera. The vehicle control system can acquire data such as images and sounds acquired by the vehicle-mounted video acquisition equipment.
For convenience of description and understanding, the following description will be made by taking the vehicle-mounted video capture device as a driving recorder as an example, but the invention is not limited thereto.
During the running process of the vehicle, for each image frame collected by the automobile data recorder, the lane line in the image frame can be identified, and the position information of the lane line in the vehicle body coordinate system is determined.
In one example, a segmentation result of a lane line in the captured image frame may be identified using a segmentation algorithm based on deep learning, and a lane line (e.g., a location of the lane line in the image frame) in the captured image frame may be identified based on the segmentation result.
Illustratively, the segmentation algorithm can include, but is not limited to, instance segmentation and semantic segmentation, where instance segmentation, on the basis of semantic segmentation, screens out the feature vectors belonging to the lane lines according to the semantic segmentation map and clusters these feature vectors to finally obtain an instance segmentation result of the lane lines.
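As a non-authoritative illustration of the clustering step just described, the following is a minimal sketch, assuming a binary semantic mask and per-pixel embedding vectors are already produced by a lane-line segmentation network; DBSCAN is used here merely as one possible clustering choice, and all function and variable names are hypothetical rather than taken from the application.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def lane_line_instances(semantic_mask: np.ndarray, embeddings: np.ndarray) -> np.ndarray:
    """Cluster lane-line pixels into per-line instances.

    semantic_mask: (H, W) array, nonzero where a pixel belongs to any lane line.
    embeddings:    (H, W, D) per-pixel feature vectors from the segmentation network.
    Returns an (H, W) array of instance ids (0 = background).
    """
    ys, xs = np.nonzero(semantic_mask)              # keep only lane-line pixels
    feats = embeddings[ys, xs]                      # (N, D) feature vectors to cluster
    labels = DBSCAN(eps=0.5, min_samples=50).fit_predict(feats)

    instance_map = np.zeros(semantic_mask.shape, dtype=np.int32)
    instance_map[ys, xs] = labels + 1               # shift labels so background stays 0
    return instance_map
```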
In one example, the determining the position information of the lane line in the vehicle body coordinate system in step S100 may include:
and performing inverse perspective transformation on the identification result of the lane line in the acquired image frame, and determining the position information of the lane line in the vehicle body coordinate system.
For example, the position information of the lane line in the vehicle body coordinate system may be determined by means of inverse perspective transformation based on the recognition result of the lane line in the image frame.
For example, when the segmentation result of the lane line in the acquired image frame is identified by using the segmentation algorithm based on the deep learning, the segmentation result of the lane line in the acquired image frame may be subjected to an inverse perspective transformation (i.e., a transformation of the image coordinate system into the vehicle body coordinate system) to determine the position of the lane line in the vehicle body coordinate system, such as a curve equation of the lane line in the vehicle body coordinate system.
For example, the vehicle body coordinate system is established with the position of the vehicle body as the origin (corresponding to the midpoint of the lower boundary of the image).
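As an illustrative sketch of this inverse perspective transformation (not the application's implementation), assume a pre-calibrated 3x3 homography that maps image pixels onto the road plane of the vehicle body coordinate system whose origin is the midpoint of the image's lower boundary; the polynomial degree and all names are assumptions.

```python
import numpy as np

def lane_curve_in_body_frame(lane_pixels_uv: np.ndarray,
                             H_img_to_body: np.ndarray,
                             degree: int = 2):
    """Map lane-line pixels (u, v) into body-frame (x, y) and fit a curve y = f(x).

    lane_pixels_uv: (N, 2) pixel coordinates of one lane-line instance.
    H_img_to_body:  (3, 3) calibrated homography (image plane -> road plane).
    """
    n = lane_pixels_uv.shape[0]
    uv1 = np.hstack([lane_pixels_uv, np.ones((n, 1))])   # homogeneous pixel coords
    xyw = uv1 @ H_img_to_body.T                          # apply the homography
    xy = xyw[:, :2] / xyw[:, 2:3]                        # normalize by the w component

    # curve equation of the lane line in the vehicle body coordinate system
    coeffs = np.polyfit(xy[:, 0], xy[:, 1], deg=degree)
    return coeffs, xy
```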
Step S110, when it is determined that a branch intersection exists in the collected image frame, determining a driving road section and a non-driving road section in the collected image frame based on the position information of the lane line in the vehicle body coordinate system.
In the embodiment of the present application, when the position information of the lane line in the body coordinate system is determined in the manner described in step S100, if it is determined that there is a branch intersection in the captured image frame, the travel section and the non-travel section in the captured image frame may be determined based on the position information of the lane line in the body coordinate system.
In one example, the presence of a branch intersection in the acquired image frame is determined by:
and when the fact that the diversion line exists in the collected image frame and the lanes exist on the two sides of the diversion line is determined, determining that a branch road junction exists in the collected image frame.
For example, with the segmentation algorithm based on deep learning, in addition to identifying lane lines in the captured image frames, other road surface markers, such as diversion lines or road edges, may be identified in the captured image frames.
For example, for a specific implementation of identifying the diversion line in the image frame by using the segmentation algorithm based on the deep learning, reference may be made to the above-mentioned implementation of identifying the lane line in the image frame by using the segmentation algorithm based on the deep learning, and details of the embodiment of the present application are not described herein.
For a certain collected image frame, when a diversion line exists in the image frame, whether lanes exist on both sides of the diversion line can be determined based on the lane line distribution in the image frame; if so, it is determined that a branch intersection exists in the image frame; otherwise, it is determined that no branch intersection exists in the image frame.
In the embodiment of the application, when it is determined that the branch road junction exists in the acquired image frame, the driving road section and the non-driving road section in the acquired image frame can be determined based on the position information of the lane line in the vehicle body coordinate system.
The travel section and the non-travel section are determined for the vehicle, and the travel section and the non-travel section may be different for different vehicles. For any vehicle, if the vehicle is positioned on the left side of the diversion line, the road on the left side of the diversion line is a driving road section, and the road on the right side of the diversion line is a non-driving road section; if the vehicle is positioned on the right side of the diversion line, the road on the right side of the diversion line is a driving road section, and the road on the left side of the diversion line is a non-driving road section.
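The branch-intersection condition above (a diversion line is present and lanes lie on both of its sides) can be expressed as a small check; the sketch below assumes the diversion line and the lane centers have already been reduced to lateral offsets in the vehicle body coordinate system, and the names are illustrative only.

```python
from typing import Iterable, Optional

def has_branch_intersection(diversion_line_x: Optional[float],
                            lane_center_xs: Iterable[float]) -> bool:
    """A branch intersection is assumed present when a diversion line exists
    and at least one lane lies on each side of it."""
    if diversion_line_x is None:          # no diversion line detected in this frame
        return False
    xs = list(lane_center_xs)
    has_left = any(x < diversion_line_x for x in xs)
    has_right = any(x > diversion_line_x for x in xs)
    return has_left and has_right
```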
Step S120, identifying the traffic sign in the acquired image frame, and determining the position information of the traffic sign in the acquired image frame.
In the embodiment of the present application, for any captured image frame, in addition to the processing in the manner described in step S100 to step S110, the traffic sign in the image frame may be identified, and the position information of the traffic sign in the image frame may be determined.
For example, an object detection network may be utilized to identify traffic signs in captured image frames and determine location information for the traffic signs in the image frames.
For example, the position information of the traffic sign in the image frame may be characterized based on the center position of the recognition frame of the traffic sign.
It should be noted that, in the embodiment of the present application, there is no necessary timing relationship between steps S100 to S110 and step S120, that is, the operations in steps S100 to S110 may be performed first, and then the operation in step S120 may be performed; alternatively, the operation in step S120 may be performed first, and then the operations in steps S100 to S110 may be performed; alternatively, the operations in step S100 to step S110 and step S120 may be executed in parallel, which is not limited in this embodiment of the application.
Step S130, determining a traffic sign matching the driving road section as a valid traffic sign and determining a traffic sign matching the non-driving road section as an invalid traffic sign based on the driving road section and the non-driving road section in the acquired image frame and the position information of the traffic sign in the acquired image frame.
In the embodiment of the application, the traffic sign matched with the driving road section can be determined as the effective traffic sign and the traffic sign matched with the non-driving road section can be determined as the ineffective traffic sign based on the driving road section and the non-driving road section in the collected image frame and the position information of the traffic sign in the collected image frame.
It can be seen that, in the flow shown in fig. 1, in the case of a branch intersection, a driving road section and a non-driving road section in the collected image frame are identified, and then a traffic sign matched with the driving road section and a traffic sign matched with the non-driving road section are respectively identified, and the traffic sign matched with the non-driving road section is determined as an invalid traffic sign, so that the identification effect of the traffic sign is optimized, and more accurate data support is provided for vehicle driving control.
In one embodiment of the present application, as shown in fig. 2, in step S110, determining the driving road segment and the non-driving road segment in the collected image frame based on the position information of the lane line in the vehicle body coordinate system may be implemented by the following steps:
and step S111, determining the lane where the target vehicle is located based on the position information of the lane line in the vehicle body coordinate system and the position information of the target vehicle in the vehicle body coordinate system.
Step S112, determining a road on the first side of the flow guide line as a driving road section and determining a road on the second side of the flow guide line as a non-driving road section based on the relative position of the lane where the target vehicle is located and the flow guide line; wherein the lane in which the target vehicle is located is on the first side of the diversion line.
In the embodiment of the application, the target vehicle does not refer to a fixed vehicle, but refers to any vehicle which realizes traffic sign recognition by adopting the scheme provided by the embodiment of the application, and the embodiment of the application is not repeated in the following.
For example, when it is determined that a branch road exists in the acquired image frames, the lane where the target vehicle is located may be determined based on the position information of the lane line in the body coordinate system and the position information of the target vehicle in the body coordinate system (i.e., the origin of the body coordinate system).
For example, in the vehicle body coordinate system, a lane line on the left side of the origin closest to the origin may be determined as a left lane line of the lane where the current vehicle is located, a lane line on the right side of the origin closest to the origin may be determined as a right lane line of the lane where the current vehicle is located, and then, the lane where the target vehicle is located may be determined.
When the lane where the target vehicle is located is determined, the relative position of the lane where the target vehicle is located and the diversion line may be determined (that is, the lane where the target vehicle is located is determined to be on the left side or the right side of the diversion line), and based on the relative position of the lane where the target vehicle is located and the diversion line, a road on the first side of the diversion line (assuming that the lane where the target vehicle is located is on the first side of the diversion line) is determined as a driving road section, and a road on the second side of the diversion line is determined as a non-driving road section.
Illustratively, when the first side is the left side, the second side is the right side; when the first side is the right side, the second side is the left side.
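A minimal sketch of steps S111 and S112 under the same simplifying assumption (lane lines and the diversion line are represented by lateral offsets in the vehicle body coordinate system, with the target vehicle at x = 0); this only illustrates the left/right reasoning and is not the application's code.

```python
from typing import Sequence, Tuple

def split_driving_sections(lane_line_xs: Sequence[float],
                           diversion_line_x: float) -> Tuple[str, str]:
    """Return (driving_side, non_driving_side) relative to the diversion line."""
    # the lane lines nearest the origin on each side bound the lane of the target vehicle
    left_line = max((x for x in lane_line_xs if x < 0.0), default=0.0)
    right_line = min((x for x in lane_line_xs if x > 0.0), default=0.0)
    current_lane_center = (left_line + right_line) / 2.0

    # the side of the diversion line that contains the current lane is the driving section
    driving_side = "left" if current_lane_center < diversion_line_x else "right"
    non_driving_side = "right" if driving_side == "left" else "left"
    return driving_side, non_driving_side
```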
In one embodiment of the present application, in step S130, the traffic sign matching with the traveling road segment and the traffic sign matching with the non-traveling road segment may be determined by:
for any traffic sign, mapping the center position of the traffic sign in the acquired image frame to a lane area to determine a lane matched with the traffic sign;
when the lane is the lane of the driving road section, determining that the traffic sign is matched with the driving road section;
and when the lane is the lane of the non-driving road section, determining that the traffic sign is matched with the non-driving road section.
For example, when the position information of the traffic sign in the acquired image frame is determined based on the manner described in step S120, for any traffic sign, the center position of the traffic sign in the acquired image frame may be mapped to the lane area to determine the lane matching the traffic sign.
For example, the center position of the traffic sign in the acquired image frame may be extended downward (i.e., an extension line is drawn downward with the center point of the traffic sign in the image frame as a starting point), and the lane which intersects the extension line first may be determined as the lane matching the traffic sign.
For any traffic sign, when the lane that the traffic sign matches is determined, it may be further determined that the lane is a lane of a travel segment or a lane of a non-travel segment.
If the lane is the lane of the driving road section, determining that the traffic sign is matched with the driving road section;
and if the lane is the lane of the non-driving road section, determining that the traffic sign is matched with the non-driving road section.
In one embodiment of the present application, after determining the traffic sign matched with the traveling section as a valid traffic sign and determining the traffic sign matched with the non-traveling section as an invalid traffic sign in step S130, the method may further include:
and performing vehicle running control based on the effective traffic sign.
For example, when a valid traffic sign in the captured image frame is determined, vehicle travel control may be performed on the target vehicle based on the valid traffic sign.
In one example, the above vehicle driving control based on the valid traffic sign may include:
controlling the running speed of the target vehicle based on a speed limit sign in the valid traffic signs;
and/or,
controlling the running of the target vehicle based on a weight limit sign in the valid traffic signs and the weight of the target vehicle;
and/or,
controlling the running of the target vehicle based on a vehicle type limit sign in the valid traffic signs and the type of the target vehicle;
and/or,
controlling the running of the target vehicle based on a height limit sign in the valid traffic signs and the height of the target vehicle.
For example, valid traffic signs may include, but are not limited to, speed limit signs, weight limit signs, height limit signs, and vehicle type limit signs.
For example, taking the control of the driving speed of the vehicle as an example, when the effective traffic signs in the acquired image frames are determined, the speed limit signs in each effective traffic sign can be further identified, and the driving speed of the vehicle is controlled based on the identified speed limit signs.
For example, controlling the traveling speed of the vehicle may include, but is not limited to, controlling the traveling speed of the vehicle not to exceed a preset maximum speed limit (for an unmanned scene) based on the speed limit sign or voice-broadcasting a speed limit prompt message to prompt the driver to control the traveling speed of the vehicle (for a manned scene).
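As a rough sketch of the two control behaviours mentioned above (speed capping in an unmanned scene, voice prompting in a manned scene); the `set_target_speed` and `announce` calls are placeholder interfaces, not APIs defined by the application.

```python
def apply_speed_limit(vehicle, current_speed_kmh: float,
                      limit_kmh: float, unmanned: bool) -> None:
    """React to a valid speed limit sign on the driving road section."""
    if unmanned:
        # unmanned scene: never request a speed above the posted limit
        vehicle.set_target_speed(min(current_speed_kmh, limit_kmh))
    elif current_speed_kmh > limit_kmh:
        # manned scene: broadcast a prompt so the driver can slow down in time
        vehicle.announce(f"Speed limit {limit_kmh:.0f} km/h ahead, please adjust your speed")
```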
It should be noted that, in the embodiment of the present application, the category of the traffic sign in the acquired image frame may be identified first, and then the traffic sign of the specified category is identified as an effective traffic sign or an ineffective traffic sign according to the requirement.
For example, when the height limit flag is present in the valid traffic flag, it may be determined whether the height of the target vehicle exceeds the height indicated by the height limit flag, and if so, the target vehicle may be prohibited from continuing to travel on the current travel route, for example, the target vehicle may be controlled to travel on a road where there is no height limit or where the height limit exceeds the height of the target vehicle.
The implementation of weight limitation and vehicle type limitation is similar to the implementation of height limitation, and details are not repeated here in the embodiments of the present application.
In one embodiment of the present application, the method for identifying a traffic sign may further include:
when a plurality of traffic signs of the same type exist in the acquired image frames, respectively determining lanes matched with the traffic signs in the plurality of traffic signs of the same type;
and performing vehicle running control on the target vehicle based on the traffic sign matched with the lane where the target vehicle is located.
For example, considering an actual scene, there may be a case where the driving requirements of different lanes of the same road section are different.
Taking speed limit as an example, the highest speed limit of different lanes in the same road section may be different.
Accordingly, when a plurality of traffic signs of the same type are detected in the acquired image frames by using the object detection network, the lanes matched with the traffic signs in the plurality of traffic signs of the same type can be respectively determined.
For example, the specific implementation of determining the lane matched with the traffic sign may refer to the related description in the foregoing embodiments, and the description of the embodiments of the present application is omitted here.
When the lane matching each traffic sign is determined, vehicle travel control may be performed on the target vehicle based on the traffic sign matching the lane in which the target vehicle is located.
It should be noted that, in the embodiment of the present application, for a scene where a branch intersection exists, the lane matching a traffic sign is already determined when determining whether the traffic sign is a valid traffic sign; therefore, when a plurality of valid traffic signs of the same type exist in the acquired image frames, the target vehicle may be controlled to travel based on the valid traffic sign of that type matching the lane where the target vehicle is located.
In one example, the determining the lanes matched with the traffic signs in the plurality of traffic signs of the same type respectively may include:
respectively mapping the central position of each traffic sign in the acquired image frame to a lane area to obtain corresponding mapping points;
when a plurality of mapping points are positioned in the same target lane, determining the traffic sign corresponding to the target mapping point as a traffic sign matched with the target lane; the target mapping point is the mapping point with the minimum distance to the center line position of the target lane in the plurality of mapping points;
and/or, the traffic sign corresponding to the mapping point on the right side of the target mapping point in the plurality of mapping points is determined as the traffic sign matched with the adjacent lane on the right side of the target lane.
By way of example, a traffic sign typically appears near the far end of a video frame captured by the driving recorder, where the distance between lane lines in the picture is smaller (i.e., the lanes at the far end of the video frame appear narrower in the picture, especially the lanes at the two sides of the road). Therefore, when a plurality of traffic signs of the same type exist in the captured image frame and the lane matched with each traffic sign is determined by mapping the center position of the traffic sign to the lane area, a traffic sign belonging to a certain lane may be mapped to an adjacent lane. In order to accurately determine the lane to which a traffic sign matches, the lane matched with the traffic sign can therefore be determined based on the distance from the mapping point, obtained by mapping the center position of the traffic sign to the lane area, to the center line position of the lane.
When the acquired image frame includes a plurality of traffic signs of the same type, the center positions of the traffic signs in the acquired image frame may be mapped to the lane area to obtain corresponding mapping points (one traffic sign corresponds to one mapping point), and it may be determined whether a plurality of mapping points are located in the same lane.
For example, when a mapping point corresponding to a traffic sign is located on a lane line of a lane, it may be determined that the mapping point is located in the lane.
When there are a plurality of mapping points in the same lane (referred to herein as a target lane), the distance from each of the plurality of mapping points to the center line position of the target lane may be calculated, and a traffic sign corresponding to the mapping point (referred to herein as a target mapping point) having the smallest distance to the center line position of the target lane among the plurality of mapping points may be determined as a traffic sign matching the target lane.
In an actual scene, when a plurality of traffic signs of the same type exist in an image frame acquired by the driving recorder, the plurality of traffic signs of the same type generally correspond one-to-one to the lanes (that is, one lane corresponds to one traffic sign of that type). It is usually a traffic sign corresponding to the leftmost or rightmost lane of the road that is mapped to an adjacent lane, and a traffic sign corresponding to one lane is generally not mapped to a non-adjacent lane. Therefore, when the traffic sign matching the target lane is determined, if the plurality of mapping points in the target lane include mapping points on the left side of the target mapping point, the traffic signs corresponding to the mapping points on the left side of the target mapping point can be determined as traffic signs matching the adjacent lane on the left side of the target lane; if the plurality of mapping points in the target lane include mapping points on the right side of the target mapping point, the traffic signs corresponding to the mapping points on the right side of the target mapping point can be determined as traffic signs matching the adjacent lane on the right side of the target lane.
When the mapping points are located in different lanes, the matching relationship between the lanes and the traffic signs may be determined based on the lanes in which the mapping points are located.
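A sketch of the centre-line tie-breaking just described, for the case where several same-type signs map into one target lane; each lane is assumed to expose a `center_u` column and `left_neighbor` / `right_neighbor` references, all of which are hypothetical names used only for illustration.

```python
def assign_same_type_signs(mapping_points, target_lane) -> dict:
    """mapping_points: list of (sign, u_column) pairs whose mapping points all
    fall inside target_lane. Returns a {sign: lane} assignment."""
    # the mapping point closest to the lane's center line keeps the target lane
    target_sign, target_u = min(mapping_points,
                                key=lambda p: abs(p[1] - target_lane.center_u))
    assignment = {target_sign: target_lane}

    for sign, u in mapping_points:
        if sign is target_sign:
            continue
        # points left / right of the target point belong to the adjacent left / right lane
        neighbor = target_lane.left_neighbor if u < target_u else target_lane.right_neighbor
        if neighbor is not None:
            assignment[sign] = neighbor
    return assignment
```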
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical solutions provided by the embodiments of the present application are described below with reference to specific examples.
This embodiment takes speed limit control in an unmanned driving scene as an example, and the vehicle acquires images and sounds during driving through a driving recorder.
Referring to fig. 3, a flow diagram of the speed limit control in this embodiment may be as shown in fig. 3, and each flow in the speed limit control is described below.
Identification of lane lines and pavement markings
1. Segmentation algorithm based on deep learning
Due to the complexity and diversity of road environments, accurate detection of roads still faces many problems, such as shadows, illumination variations, road surface damage, vehicle occlusion, and background interference. Traditional segmentation algorithms, such as K-means clustering and Grab-Cut, perform image segmentation according to low-level visual information of image pixels, and the segmentation effect is difficult to meet the requirements.
Therefore, in order to optimize the segmentation effect, an instance segmentation algorithm based on deep learning may be adopted. On the basis of semantic segmentation, the instance segmentation algorithm screens out feature vectors belonging to the lane lines and the road surface markers according to the semantic segmentation map, and clusters the feature vectors to finally obtain an instance segmentation result of the lane lines and the road surface markers; a schematic diagram is shown in fig. 4A.
2. Lane division
And mapping the lane lines in the image coordinate system to a vehicle body coordinate system through inverse perspective transformation, and dividing the lanes according to the left-right relation of the lanes and the relative position relation of the lanes and the vehicle.
As shown in fig. 4B, the lane line 6 is a left lane line closest to the origin of coordinates (the midpoint of the lower boundary of the image, i.e., the position of the target vehicle), the lane line 4 is a right lane line closest to the origin of coordinates, and the area between the two is the current lane (i.e., the lane in which the target vehicle is located).
3. Identifying travel and non-travel segments
A branch intersection is identified based on the diversion line in the acquired image frames, and the driving road section and the non-driving road section are determined based on the relative positions of the lane lines of the current lane and the diversion line.
Exemplarily, the left and right edges of the segmentation result of the diversion line are expanded in the image coordinate system, and lane lines close to the segmentation result of the diversion line are classified as belonging to the diversion line. If the current lane is on the left side of the diversion line, the left side of the diversion line is the driving road section of the target vehicle, and the right side of the diversion line is the non-driving road section; if the current lane is on the right side of the diversion line, the right side of the diversion line is the driving road section of the target vehicle, and the left side of the diversion line is the non-driving road section.
Identification of the speed limit sign
And identifying the speed limit sign in the acquired image frame by using the target detection network, and determining the position information of the speed limit sign in the image frame.
Matching of lane and speed limit sign
In the image coordinate system, an extension line is drawn downward from the center point of the speed limit sign recognition frame, and the lane that the extension line first intersects is the lane matched with the speed limit sign.
For example, as shown in fig. 4C, in the image coordinate system, it is assumed that the left lane line of the current lane is lane line 1, the right lane line is lane line 2, the left and right lines of the diversion line are a and b, respectively, and the center point of the speed limit sign recognition frame is S. Based on the positional relations among these lines and the point S in the image coordinate system:
if lines a and b are both on the left side of line 1 and S is on the left side of a and b, the speed limit sign is determined to be a speed limit sign of the non-driving road section and is an invalid speed limit sign;
if lines a and b are both on the right side of line 2 and S is on the right side of a and b, the speed limit sign is determined to be a speed limit sign of the non-driving road section and is an invalid speed limit sign;
if S and the left and right lane lines 1 and 2 of the vehicle are all on the left side of lines a and b, or S and the left and right lane lines 1 and 2 of the vehicle are all on the right side of lines a and b, the speed limit sign is a speed limit sign of the driving road section and is a valid speed limit sign.
In this embodiment, the target vehicle may control the travel speed of the target vehicle based on the identified valid speed limit sign.
It can be seen that, in this embodiment, on the one hand, the lane where the target vehicle is located can be determined based on the position information of the lane lines in the vehicle body coordinate system, and when a branch intersection exists, the driving road section and the non-driving road section can be determined based on the relative position relationship between the lane where the target vehicle is located and the diversion line; on the other hand, the traffic sign (taking the speed limit sign as an example) can be identified based on the target detection network, and the matching between the lane and the speed limit sign is realized. The vehicle position determination and the traffic sign identification are thus realized without the navigation and positioning equipment, such as a Global Positioning System (GPS) and an electronic map, used in traditional schemes, which expands the applicable scenes of the scheme and reduces the implementation cost.
Distinguishing different speed limit signs of multiple lanes
When a plurality of speed limit signs exist in the acquired image frame, the center point of the recognition frame of each speed limit sign can be mapped to the lane area in the image coordinate system so as to match each speed limit sign with a lane (a schematic diagram is shown in fig. 4D), thereby determining the speed limit sign of the current lane; the running speed of the target vehicle is then controlled based on the speed limit sign of the current lane.
In the embodiment of the application, the position information of the lane line in the vehicle body coordinate system is determined by identifying the lane line in the collected image frame; when it is determined that a branch intersection exists in the collected image frame, a driving road section and a non-driving road section in the collected image frame are determined based on the position information of the lane line in the vehicle body coordinate system; a traffic sign in the acquired image frame is identified and the position information of the traffic sign in the acquired image frame is determined; and, based on the driving road section and the non-driving road section in the collected image frame and the position information of the traffic sign in the collected image frame, the traffic sign matched with the driving road section is determined as a valid traffic sign and the traffic sign matched with the non-driving road section is determined as an invalid traffic sign, so that the recognition effect of the traffic sign is optimized and more accurate data support is provided for vehicle driving control.
The methods provided herein are described above. The following describes the apparatus provided in the present application:
referring to fig. 5, a schematic structural diagram of a traffic sign recognition apparatus according to an embodiment of the present application is shown in fig. 5, where the traffic sign recognition apparatus may include:
an identifying unit 510 for identifying a lane line in the acquired image frame;
a first determining unit 520, configured to determine position information of the lane line in a vehicle body coordinate system;
a second determining unit 530 for determining a traveling road section and a non-traveling road section in the captured image frame based on the position information of the lane line in the vehicle body coordinate system when it is determined that the branch road exists in the captured image frame;
the identifying unit 510 is further configured to identify a traffic sign in the acquired image frame;
the first determining unit 520 is further configured to determine position information of the traffic sign in the acquired image frame;
a third determining unit 540, configured to determine a traffic sign matching the driving road section as a valid traffic sign and determine a traffic sign matching the non-driving road section as an invalid traffic sign based on the driving road section and the non-driving road section in the acquired image frame and the position information of the traffic sign in the acquired image frame.
In one possible embodiment, the first determining unit 520 determines the position information of the lane line in the vehicle body coordinate system, including:
and performing inverse perspective transformation on the recognition result of the lane line in the acquired image frame, and determining the position information of the lane line in the vehicle body coordinate system.
In one possible embodiment, the presence of a branch intersection in the acquired image frame is determined by:
and when the fact that the diversion line exists in the collected image frame and the lanes exist on the two sides of the diversion line is determined, determining that a branch road junction exists in the collected image frame.
In one possible embodiment, the second determining unit 530 determines the driving section and the non-driving section in the acquired image frame based on the position information of the lane line in the vehicle body coordinate system, including:
determining a lane where the target vehicle is located based on the position information of the lane line in the vehicle body coordinate system and the position information of the target vehicle in the vehicle body coordinate system;
determining a road on a first side of the diversion line as a driving road section and determining a road on a second side of the diversion line as a non-driving road section based on the relative position of the lane where the target vehicle is located and the diversion line; wherein the lane in which the target vehicle is located is on the first side of the diversion line.
In one possible embodiment, the traffic sign matching the travel section and the traffic sign matching the non-travel section are determined by:
for any traffic sign, mapping the center position of the traffic sign in the acquired image frame to a lane area to determine a lane matched with the traffic sign;
when the lane is the lane of the driving road section, determining that the traffic sign is matched with the driving road section;
and when the lane is the lane of the non-driving road section, determining that the traffic sign is matched with the non-driving road section.
In a possible embodiment, after the third determining unit 540 determines the traffic sign matching the traveling section as a valid traffic sign and determines the traffic sign matching the non-traveling section as an invalid traffic sign, the method further includes:
and performing vehicle running control based on the effective traffic sign.
In one possible embodiment, the third determining unit 540 performs vehicle driving control based on the valid traffic sign, including:
controlling the running speed of the target vehicle based on a speed limit sign in the valid traffic signs;
and/or,
controlling the running of the target vehicle based on a weight limit sign in the valid traffic signs and the weight of the target vehicle;
and/or,
controlling the running of the target vehicle based on a vehicle type limit sign in the valid traffic signs and the type of the target vehicle;
and/or,
controlling the running of the target vehicle based on a height limit sign in the valid traffic signs and the height of the target vehicle.
In a possible embodiment, the third determining unit 540 is further configured to, when a plurality of traffic signs of the same type exist in the acquired image frames, respectively determine lanes matched with the traffic signs in the plurality of traffic signs of the same type; and performing vehicle running control on the target vehicle based on the traffic sign matched with the lane where the target vehicle is located.
In a possible embodiment, the third determining unit 540 is specifically configured to map the center position of each traffic sign in the acquired image frame to a lane area, so as to obtain a corresponding mapping point;
when a plurality of mapping points are positioned in the same target lane, determining the traffic sign corresponding to the target mapping point as a traffic sign matched with the target lane; the target mapping point is the mapping point with the minimum distance to the center line position of the target lane in the plurality of mapping points;
and/or, the traffic sign corresponding to the mapping point on the right side of the target mapping point in the plurality of mapping points is determined as the traffic sign matched with the adjacent lane on the right side of the target lane.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure. The electronic device may include a processor 601 and a memory 602 storing machine-executable instructions. The processor 601 and the memory 602 may communicate via a system bus 603. The processor 601 may perform the traffic sign identification method described above by reading and executing the machine-executable instructions in the memory 602 that correspond to the traffic sign recognition control logic.
The memory 602 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid state drive, any type of storage disk (e.g., an optical disc or a DVD), or a similar storage medium, or a combination thereof.
In some embodiments, a machine-readable storage medium, such as the memory 602 in fig. 6, having stored therein machine-executable instructions that, when executed by a processor, implement the traffic sign recognition method described above is also provided. For example, the machine-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and so forth.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (11)

1. A traffic sign recognition method, comprising:
identifying a lane line in the collected image frame, and determining the position information of the lane line in a vehicle body coordinate system;
when it is determined that a branch intersection exists in the collected image frame, determining a driving road section and a non-driving road section in the collected image frame based on the position information of the lane line in the vehicle body coordinate system;
identifying a traffic sign in the acquired image frame and determining position information of the traffic sign in the acquired image frame;
and determining a traffic sign matched with the driving road section as a valid traffic sign and determining a traffic sign matched with the non-driving road section as an invalid traffic sign based on the driving road section and the non-driving road section in the acquired image frame and the position information of the traffic sign in the acquired image frame.
2. The method of claim 1, wherein the presence of a branch intersection in the acquired image frames is determined by:
and when the fact that the diversion line exists in the collected image frame and the lanes exist on the two sides of the diversion line is determined, determining that a branch road junction exists in the collected image frame.
3. The method of claim 2, wherein determining the driving section and the non-driving section in the acquired image frame based on the position information of the lane line in the vehicle body coordinate system comprises:
determining a lane where the target vehicle is located based on the position information of the lane line in the vehicle body coordinate system and the position information of the target vehicle in the vehicle body coordinate system;
determining a road on a first side of the diversion line as a driving road section and determining a road on a second side of the diversion line as a non-driving road section based on the relative position of the lane where the target vehicle is located and the diversion line; wherein the lane in which the target vehicle is located is on the first side of the diversion line.
4. The method of claim 1, wherein the traffic sign matching the travel segment and the traffic sign matching the non-travel segment are determined by:
for any traffic sign, mapping the center position of the traffic sign in the acquired image frame to a lane area to determine a lane matched with the traffic sign;
when the lane is the lane of the driving road section, determining that the traffic sign is matched with the driving road section;
and when the lane is the lane of the non-driving road section, determining that the traffic sign is matched with the non-driving road section.
5. The method according to any one of claims 1-4, wherein after determining the traffic sign matching the travel segment as a valid traffic sign and determining the traffic sign matching the non-travel segment as an invalid traffic sign, further comprising:
and performing vehicle running control based on the effective traffic sign.
6. The method of claim 5, wherein performing vehicle driving control based on the valid traffic sign comprises:
controlling a driving speed of the target vehicle based on a speed limit sign among the valid traffic signs;
and/or,
controlling driving of the target vehicle based on a weight limit sign among the valid traffic signs and the weight of the target vehicle;
and/or,
controlling driving of the target vehicle based on a vehicle type limit sign among the valid traffic signs and the type of the target vehicle;
and/or,
controlling driving of the target vehicle based on a height limit sign among the valid traffic signs and the height of the target vehicle.
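The control step of claim 6 can be illustrated with a minimal sketch; the sign and vehicle data model below (sign kinds, units, field names) is an assumption made for illustration and is not taken from the disclosure:

```python
# Illustrative sketch only: deriving a simple control decision from valid traffic signs.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ValidSign:
    kind: str               # "speed_limit" | "weight_limit" | "type_limit" | "height_limit"
    value: float = 0.0      # km/h, tonnes or metres, depending on kind
    vehicle_type: str = ""  # only used for "type_limit"


@dataclass
class Vehicle:
    speed_kmh: float
    weight_t: float
    height_m: float
    vehicle_type: str


def plan(vehicle: Vehicle, signs: List[ValidSign]) -> dict:
    """Return a target speed and a pass/no-pass decision based only on the valid signs."""
    target_speed: Optional[float] = None
    may_pass = True
    for sign in signs:
        if sign.kind == "speed_limit":
            target_speed = sign.value if target_speed is None else min(target_speed, sign.value)
        elif sign.kind == "weight_limit" and vehicle.weight_t > sign.value:
            may_pass = False
        elif sign.kind == "height_limit" and vehicle.height_m > sign.value:
            may_pass = False
        elif sign.kind == "type_limit" and vehicle.vehicle_type == sign.vehicle_type:
            may_pass = False
    return {"target_speed_kmh": target_speed, "may_pass": may_pass}


if __name__ == "__main__":
    truck = Vehicle(speed_kmh=72.0, weight_t=18.0, height_m=3.8, vehicle_type="truck")
    signs = [ValidSign("speed_limit", 60.0), ValidSign("height_limit", 4.5)]
    print(plan(truck, signs))  # {'target_speed_kmh': 60.0, 'may_pass': True}
```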
7. The method according to any one of claims 1-4, further comprising:
when a plurality of traffic signs of the same type exist in the acquired image frame, respectively determining a lane matched with each of the plurality of traffic signs of the same type;
and performing vehicle driving control on the target vehicle based on the traffic sign matched with the lane where the target vehicle is located.
8. The method of claim 7, wherein determining the lane matched with each of the plurality of traffic signs of the same type comprises:
respectively mapping the center position of each traffic sign in the acquired image frame to a lane area to obtain a corresponding mapping point;
when a plurality of mapping points are located in the same target lane, determining the traffic sign corresponding to a target mapping point as the traffic sign matched with the target lane; wherein the target mapping point is the mapping point, among the plurality of mapping points, with the minimum distance to the center line of the target lane;
and/or, determining the traffic sign corresponding to a mapping point located on the right side of the target mapping point, among the plurality of mapping points, as the traffic sign matched with the lane adjacent to the right side of the target lane.
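A minimal sketch of the same-type sign assignment rule of claim 8, assuming for illustration that mapping points and lane areas are expressed as lateral coordinates at a common mapping row and that lanes are ordered left to right; the data structures below are assumptions, not the original implementation:

```python
# Illustrative sketch only: assigning several signs of the same type to lanes. Among the
# mapping points that fall into one lane, the point closest to the lane center line is
# matched to that lane, and points to its right are pushed to the right-adjacent lane.

from typing import Dict, List, Tuple

# Each lane: (lane_id, left_x, right_x) of its area at the mapping row, ordered left to right.
LaneAreas = List[Tuple[str, float, float]]


def assign_signs(mapping_points: Dict[str, float], lanes: LaneAreas) -> Dict[str, str]:
    """Map sign_id -> lane_id, where mapping_points gives each sign's mapped x coordinate."""
    assignment: Dict[str, str] = {}
    for idx, (lane_id, left_x, right_x) in enumerate(lanes):
        center_x = 0.5 * (left_x + right_x)
        inside = {s: x for s, x in mapping_points.items()
                  if left_x <= x <= right_x and s not in assignment}
        if not inside:
            continue
        # The mapping point closest to the lane center line matches this lane.
        best = min(inside, key=lambda s: abs(inside[s] - center_x))
        assignment[best] = lane_id
        # Points to the right of the chosen one are matched to the right-adjacent lane.
        if idx + 1 < len(lanes):
            right_lane_id = lanes[idx + 1][0]
            for s, x in inside.items():
                if s != best and x > inside[best]:
                    assignment[s] = right_lane_id
    return assignment


if __name__ == "__main__":
    lanes = [("lane_0", 0.0, 3.5), ("lane_1", 3.5, 7.0)]
    points = {"sign_a": 1.6, "sign_b": 3.2}  # both fall inside lane_0
    print(assign_signs(points, lanes))  # {'sign_a': 'lane_0', 'sign_b': 'lane_1'}
```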
9. A traffic sign recognition apparatus, comprising:
the identification unit is used for identifying lane lines in the acquired image frames;
the first determining unit is used for determining the position information of the lane line in the vehicle body coordinate system;
the second determining unit is used for determining a driving road section and a non-driving road section in the acquired image frame based on the position information of the lane line in the vehicle body coordinate system when it is determined that a branch road junction exists in the acquired image frame;
the identification unit is also used for identifying the traffic signs in the acquired image frames;
the first determining unit is further used for determining the position information of the traffic sign in the acquired image frame;
and a third determining unit, configured to determine a traffic sign matching the driving road section as a valid traffic sign and determine a traffic sign matching the non-driving road section as an invalid traffic sign based on the driving road section and the non-driving road section in the acquired image frame and the position information of the traffic sign in the acquired image frame.
10. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor being configured to execute the machine executable instructions to implement the method of any one of claims 1 to 8.
11. A machine-readable storage medium having stored therein machine-executable instructions which, when executed by a processor, implement the method of any one of claims 1-8.
CN202010429989.3A 2020-05-20 2020-05-20 Traffic sign identification method and device and electronic equipment Pending CN113705273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010429989.3A CN113705273A (en) 2020-05-20 2020-05-20 Traffic sign identification method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010429989.3A CN113705273A (en) 2020-05-20 2020-05-20 Traffic sign identification method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113705273A 2021-11-26

Family

ID=78645506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010429989.3A Pending CN113705273A (en) 2020-05-20 2020-05-20 Traffic sign identification method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113705273A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020085095A1 (en) * 2000-12-28 2002-07-04 Holger Janssen Method and device for producing road and street data for a digital map
CN105022985A (en) * 2014-04-25 2015-11-04 本田技研工业株式会社 Lane recognition device
CN107560622A (en) * 2016-07-01 2018-01-09 板牙信息科技(上海)有限公司 A kind of method and apparatus based on driving image-guidance
CN109671286A (en) * 2018-12-26 2019-04-23 东软睿驰汽车技术(沈阳)有限公司 A kind for the treatment of method and apparatus of road information and Traffic Information
CN109816980A (en) * 2019-02-20 2019-05-28 东软睿驰汽车技术(沈阳)有限公司 The method and relevant apparatus in lane locating for a kind of determining vehicle
CN111176338A (en) * 2019-12-31 2020-05-19 维沃移动通信有限公司 Navigation method, electronic device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王敏; 魏衡华; 鲍远律: "Map Matching Algorithm in GPS Navigation System" (GPS导航系统中的地图匹配算法), Computer Engineering (计算机工程), no. 14 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115376331A (en) * 2022-07-29 2022-11-22 中国第一汽车股份有限公司 Method and device for determining speed limit information and electronic equipment
CN115394077A (en) * 2022-08-18 2022-11-25 中国第一汽车股份有限公司 Speed limit information determining method and device and nonvolatile storage medium
CN115394077B (en) * 2022-08-18 2023-10-27 中国第一汽车股份有限公司 Speed limit information determining method and device and nonvolatile storage medium

Similar Documents

Publication Publication Date Title
CN108021862B (en) Road sign recognition
CN111212772B (en) Method and device for determining a driving strategy of a vehicle
JP4680131B2 (en) Own vehicle position measuring device
US9815460B2 (en) Method and device for safe parking of a vehicle
US10969788B2 (en) ECU, autonomous vehicle including ECU, and method of determining driving lane for the same
US9360332B2 (en) Method for determining a course of a traffic lane for a vehicle
US11727692B2 (en) Detection of emergency vehicles
US20080123902A1 (en) Apparatus and method of estimating center line of intersection
CN103847640B (en) The method and apparatus that augmented reality is provided
JP6303362B2 (en) MAP MATCHING DEVICE AND NAVIGATION DEVICE HAVING THE SAME
JP4977218B2 (en) Self-vehicle position measurement device
CN113705273A (en) Traffic sign identification method and device and electronic equipment
JP4951481B2 (en) Road marking recognition device
JP2007183846A (en) Sign recognition apparatus for vehicle, and vehicle controller
KR101706455B1 (en) Road sign detection-based driving lane estimation method and apparatus
CN111231977B (en) Vehicle speed determination method and device, vehicle and storage medium
US20210383135A1 (en) Lane departure warning without lane lines
CN114973644A (en) Road information generating device
JP4234071B2 (en) Vehicle road surface sign detection device
KR20210034626A Method for determining the type of parking space
JP7449497B2 (en) Obstacle information acquisition system
EP4246456A1 (en) Stop line detection apparatus
JP7446445B2 (en) Image processing device, image processing method, and in-vehicle electronic control device
CN118160004A (en) Device and method for determining the distance of a lamp signal generator
JP2024011893A (en) Lane determination device, lane determination method, and lane determination computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination