CN112989900A - Method for accurately detecting traffic signs or marking lines - Google Patents

Method for accurately detecting traffic signs or marking lines

Info

Publication number
CN112989900A
CN112989900A (application CN201911376005.3A)
Authority
CN
China
Prior art keywords: key points, keypoints, traffic, detecting, traffic signs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911376005.3A
Other languages
Chinese (zh)
Inventor
杨奎元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Shendong Technology (Beijing) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shendong Technology (Beijing) Co., Ltd.
Publication of CN112989900A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of traffic signs

Abstract

The invention discloses a method for accurately detecting traffic signs or marking lines, which comprises the following steps: superposing and combining images of the traffic signs or markings to form a new synthesized sign or marking; setting key points on the synthesized sign or marking; and detecting the key points and identifying the traffic information indicated or prompted by the traffic sign or marking corresponding to the key points.

Description

Method for accurately detecting traffic signs or marking lines
Technical Field
The invention relates to the field of image detection, in particular to a method for accurately detecting traffic signs or marking lines.
Background
Guide arrows are an important type of traffic sign applied to the road surface to indicate the driving pattern of a complex road. In many fields, including automatic driving, guide arrows are important information for guiding intelligent or autonomous vehicles to use roads correctly; they are therefore widely used in many modules of the automatic-driving field, including high-precision map construction, high-precision positioning, and driving decision-making.
In the existing template matching method, a template is constructed for each arrow, the image to be detected is matched against each template, and the match is deemed successful if the matching value exceeds a threshold. Its technical drawback is that it cannot robustly handle occlusion, illumination changes, weather changes, size variation of actual guide arrows, or the differing marking standards of different countries and regions.
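For context, a minimal OpenCV sketch of such template matching is shown below; the normalized cross-correlation measure and the 0.8 threshold are illustrative assumptions rather than details of the prior art discussed here.

```python
import cv2

def match_arrow_templates(image_gray, templates, threshold=0.8):
    """Match a grayscale road image against each arrow template and return
    (score, template index, top-left location) of the best match above the
    threshold, or None. The threshold value is illustrative."""
    best = None
    for idx, template in enumerate(templates):
        result = cv2.matchTemplate(image_gray, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val >= threshold and (best is None or max_val > best[0]):
            best = (max_val, idx, max_loc)
    return best
```

Such fixed-template matching illustrates why occlusion or scale changes break the approach: any deviation from the stored templates lowers the correlation score below the threshold.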
Disclosure of Invention
The invention provides a method for accurately detecting traffic signs or markings, aiming at overcoming the inability of the prior art to robustly handle occlusion, illumination changes, weather changes, size variation of actual guide arrows, and differing national and regional marking standards.
According to one aspect of the present invention, there is provided a method of accurately detecting a traffic sign or line, the method comprising the steps of:
superposing and combining images of the traffic signs or markings to form a new synthesized sign or marking; setting key points on the synthesized sign or marking; and detecting the key points and identifying the traffic information indicated or prompted by the traffic sign or marking corresponding to the key points.
Preferably, the traffic sign or marking is a guide arrow.
Preferably, the key points are identified by labels.
Preferably, identifying the key points by labels comprises: labeling the key points with Arabic numerals, English letters, or Chinese characters.
Preferably, identifying the key points by labels comprises: numbering the key points in a counterclockwise, clockwise, or top-down order.
Preferably, the step of detecting the keypoints is implemented by a deep convolutional network.
Preferably, the deep convolutional network generates a key-point score map and a key-point feature map through two convolutional branches, respectively, where the score map measures the probability that a key point exists at each detected position, and the feature map describes whether key points correspond to the same traffic sign or marking.
Preferably, the score map screens the key-point detection results by non-maximum suppression.
Preferably, the step of detecting the key points includes: optimizing the effect of deep learning by annotating images of real traffic scenes containing the traffic signs or markings.
Compared with the prior art, the invention has the beneficial effects that:
Combining the guide arrows reduces storage and accelerates computation, and representing the guide arrows by key points handles occlusion, size variation of actual guide arrows, and differing national and regional marking standards. Through supervised learning on a large amount of labeled data, the deep model can also handle illumination changes, weather changes, and the like.
Drawings
The foregoing summary, as well as the following detailed description, will be better understood when read in conjunction with the appended drawings. For the purpose of illustration, certain embodiments of the disclosure are shown in the drawings. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate implementations of systems and apparatus according to the invention and, together with the description, serve to explain the advantages and principles of the invention.
In the drawings:
FIG. 1 is a schematic flow diagram of a method of accurately detecting traffic signs or markings according to the present invention.
Fig. 2 is a schematic structural view of a guide arrow of the present invention.
FIG. 3 is a schematic diagram of the structure of the merged arrow according to the present invention.
FIG. 4 is a schematic diagram of the detection process of the deep convolutional network of the present invention.
FIG. 5 is a schematic diagram of another detection process for the deep convolutional network of the present invention.
Detailed Description
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The drawings and written description are provided to guide those skilled in the art in making and using the invention for which patent protection is sought. The invention is capable of other embodiments and of being practiced and carried out in various ways. Those skilled in the art will appreciate that not all features of a commercial embodiment are shown for the sake of clarity and understanding. Those skilled in the art will also appreciate that the development of an actual commercial embodiment incorporating aspects of the present inventions will require numerous implementation-specific decisions to achieve the developer's ultimate goal for the commercial embodiment. While these efforts may be complex and time consuming, these efforts will be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting. For example, use of singular terms, such as "a," "an," and "the" is not intended to limit the number of items. Also, the use of relational terms, such as, but not limited to, "top," "bottom," "left," "right," "upper," "lower," "down," "up," "side," and the like are used in this description with specific reference to the figures for clarity and are not intended to limit the scope of the invention or the appended claims. Furthermore, it will be appreciated that any of the features of the present invention may be used alone, or in combination with other features. Other systems, methods, features and advantages of the invention will be, or will become, apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims.
Referring to fig. 1-4, there is shown schematically an embodiment of a method for accurately detecting traffic signs or markings according to the present invention, comprising the following steps:
step S101: merging guide arrows
As shown in fig. 2, the guide arrows are divided into twelve types, including: straight ahead; straight or right turn ahead; straight, left turn, or right turn ahead; left or right turn only ahead; left turn or U-turn ahead; straight or U-turn ahead; left turn ahead; right turn ahead; left bend ahead or merge left; and right bend ahead or merge right.
The twelve guide arrows are merged into a composite arrow by superposition and combination, in which the repeated parts of the twelve guide arrows are combined, as shown in fig. 3.
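As an illustrative sketch of this superposition, assuming each guide arrow is available as an aligned binary mask, the composite arrow can be formed by a pixel-wise OR of the masks; the mask sizes and alignment used here are assumptions.

```python
import numpy as np

def merge_guide_arrows(arrow_masks):
    """Superpose aligned binary masks of individual guide arrows; the
    repeated (overlapping) parts of the arrows merge into one composite."""
    composite = np.zeros_like(arrow_masks[0], dtype=bool)
    for mask in arrow_masks:
        composite |= mask.astype(bool)
    return composite

# Illustrative call: twelve aligned 64x32 masks merged into one composite mask.
masks = [np.zeros((64, 32), dtype=bool) for _ in range(12)]
composite_arrow = merge_guide_arrows(masks)
```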
Step S102: setting Key points of composite arrow
As shown in fig. 3, the composite arrow contains twenty-six key points, numbered 1 through 26. Correspondingly, each of the twelve guide arrows comprises several of these key points, with common parts sharing the same key-point labels. In particular, the two arrows indicating a left bend ahead or a need to merge left and a right bend ahead or a need to merge right use the same key-point labels as the straight-ahead arrow; these guide arrows can be distinguished by the projected position of key point No. 1 on the line connecting key points No. 11 and No. 12.
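For illustration, the projection mentioned above can be computed as follows; the coordinates used in the example are hypothetical.

```python
import numpy as np

def project_onto_segment(p, a, b):
    """Normalized position t of the projection of point p onto the line
    through a and b (t = 0 at a, t = 1 at b)."""
    a, b, p = np.asarray(a, float), np.asarray(b, float), np.asarray(p, float)
    ab = b - a
    return float(np.dot(p - a, ab) / np.dot(ab, ab))

# Hypothetical image coordinates of key points 1, 11 and 12 of one detection:
t = project_onto_segment(p=(10.0, 2.0), a=(0.0, 20.0), b=(20.0, 20.0))
# t near 0.5 versus near 0 or 1 can separate arrows that share the same
# key-point labels, e.g. straight-ahead versus merge-left / merge-right.
```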
Step S103: detecting key points and identifying guide arrows
Detection, characterization, and classification of the key points are realized by a deep convolutional network, and detection of the guide arrows is finally obtained; the specific flow is shown in fig. 4.
A monocular image is acquired by a monocular camera, deep features are extracted by the deep convolutional network, and a key-point score map and a key-point feature map are generated by two convolutional branches, respectively. The key-point score map measures the likelihood that a key point exists at each position, with a higher score indicating a higher likelihood, and the key-point detection results are generated by screening this map. The score map is further screened by non-maximum suppression to avoid repeated detection of the same key point.
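By way of illustration, a minimal PyTorch-style sketch of a backbone with two convolutional branches and score-map non-maximum suppression is given below; the layer sizes, channel counts, sigmoid activation, and pooling window are assumptions, not the specific network of this embodiment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointNet(nn.Module):
    """Shared backbone followed by two convolutional branches: a per-pixel
    key-point score map and a key-point feature (embedding) map."""

    def __init__(self, num_keypoints=26, feat_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.score_branch = nn.Conv2d(64, num_keypoints, kernel_size=1)
        self.feature_branch = nn.Conv2d(64, feat_dim, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)
        scores = torch.sigmoid(self.score_branch(feats))    # key-point score map
        embeddings = self.feature_branch(feats)              # key-point feature map
        return scores, embeddings

def suppress_non_maxima(scores, window=5):
    """Keep only local maxima of the score map (non-maximum suppression),
    so the same key point is not detected repeatedly."""
    pooled = F.max_pool2d(scores, window, stride=1, padding=window // 2)
    return scores * (scores == pooled).float()
```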
The key-point feature map describes each key point: features of key points belonging to the same arrow are similar, while features of key points belonging to different arrows are dissimilar, establishing the association from key points to guide arrows and thereby realizing guide-arrow detection. Specifically, based on the key-point feature map, the key points are clustered in feature space so that key points of the same guide arrow are clustered together, realizing detection and classification of the guide arrows, i.e., outputting the position and category of each guide arrow.
Finally, the key points retained after screening the score map are clustered according to their features in the feature map to form guide arrows, as in the sketch below.
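As one possible realization of this grouping step, the sketch below clusters the surviving key points by their feature vectors; the choice of DBSCAN and its parameters (eps, min_samples) are assumptions, since the embodiment only specifies clustering in feature space.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def group_keypoints_into_arrows(keypoint_xy, keypoint_features, eps=0.5):
    """Cluster detected key points in feature space so that key points whose
    features are similar (i.e. belong to the same guide arrow) end up in the
    same cluster."""
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(keypoint_features)
    arrows = {}
    for (x, y), label in zip(keypoint_xy, labels):
        if label == -1:            # noise point: not assigned to any arrow
            continue
        arrows.setdefault(label, []).append((x, y))
    return arrows                  # cluster id -> key-point coordinates of one arrow
```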
In some embodiments, deep learning can be performed by labeling tens or hundreds of thousands of actual scene images containing real guide arrows.
In some embodiments, learning of network parameters may be implemented by a cluster of GPUs.
In some embodiments, a top-down detection approach may be employed. In contrast to the bottom-up approach, which combines key points into guide arrows, the top-down approach first detects each guide arrow with a rectangular box and then detects the key points of that guide arrow within the box.
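A hedged sketch of this top-down variant is shown below; detect_arrow_boxes and detect_keypoints_in_crop are hypothetical helpers standing in for the two detection stages, which this embodiment does not specify.

```python
def detect_arrows_top_down(image, detect_arrow_boxes, detect_keypoints_in_crop):
    """Top-down variant: first detect each guide arrow with a rectangular box,
    then detect the key points of that guide arrow inside the box. Both
    detectors are passed in as callables (hypothetical helpers)."""
    results = []
    for (x0, y0, x1, y1), arrow_class in detect_arrow_boxes(image):
        crop = image[y0:y1, x0:x1]
        keypoints = [(x + x0, y + y0) for (x, y) in detect_keypoints_in_crop(crop)]
        results.append({"box": (x0, y0, x1, y1),
                        "class": arrow_class,
                        "keypoints": keypoints})
    return results
```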
On the basis of the above-described embodiments, a further embodiment of a method for the precise detection of traffic signs or markings is shown according to the invention. The method comprises the following specific steps:
step S201: key point for setting guide arrow
The guide arrows are divided into twelve types, including: straight ahead; straight or right turn ahead; straight, left turn, or right turn ahead; left or right turn only ahead; left turn or U-turn ahead; straight or U-turn ahead; left turn ahead; right turn ahead; left bend ahead or merge left; and right bend ahead or merge right.
As shown in fig. 2, the twelve guide arrows are each provided with key-point labels, and each guide arrow contains key points, with common parts sharing the same key-point labels. In particular, the two guide arrows indicating a left bend ahead or a need to merge left and a right bend ahead or a need to merge right use the same key-point labels as the straight-ahead arrow.
In some embodiments, the guide arrows may be other sizes or shapes.
In some embodiments, the key-point labels may use other symbols, for example a, b, c, d or one, two, three, four instead of 1, 2, 3, 4.
In some embodiments, the key-point labels may follow another order or other logic, for example a counterclockwise order.
In some embodiments, the number of guide-arrow types may differ; for example, the guide arrows may be divided into anywhere from two to twelve types, or more.
Step S202: merging guide arrows
As shown in fig. 3, the twelve guide arrows are merged into a composite arrow by superposition and combination, in which the repeated parts of the twelve guide arrows are combined according to their key points. The composite arrow contains twenty-six key points, numbered 1 through 26.
In some embodiments, the composite arrow may be one of the guide arrows, or a combination of superposition and combination of several of the guide arrows.
In some embodiments, the composite arrow may be formed by stacking and merging two to twelve guiding arrows, or by stacking and merging more guiding arrows.
Step S203: detecting key points and identifying guide arrows
Detection, characterization, and classification of the key points are realized by the deep convolutional network, and detection of the guide arrows is finally obtained.
On the basis of the above-described embodiment, and with reference to fig. 5, another embodiment of a method for accurately detecting traffic signs or markings is shown according to the invention. The method comprises the following specific steps:
step S301: and merging the guide arrows, and particularly referring to the step S101.
Step S302: and setting key points of the synthesized arrow, and particularly referring to the step S102.
Step S303: detecting key points and identifying guide arrows
On the basis of step S103, two additional convolutional branches are introduced to generate a center-point score map and a center-point feature map, respectively. The center point is the geometric center of the 2D box of the actual road marking on the road surface. The center-point score map measures the likelihood that a center point exists at each position, with the score representing that likelihood, and the center-point detection results are generated by screening this map. The center-point score map is further screened by non-maximum suppression.
The features of each center point are compared with the features of the key points, and a key point is associated with the corresponding center point when their similarity exceeds a preset threshold, thereby forming the detection of a guide arrow. Because each guide arrow corresponds to one center point, the center-point features describe the guide arrow as a whole and are more discriminative, which avoids the errors caused by the weaker discriminability of similarities computed between key points.
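A minimal sketch of this association step, assuming the key-point and center-point features have already been extracted as row vectors, might look like the following; the cosine-similarity measure and the 0.7 threshold are assumptions.

```python
import numpy as np

def associate_keypoints_to_centers(kp_feats, center_feats, threshold=0.7):
    """Assign each key point to the center point with the most similar feature
    vector (cosine similarity); key points whose best similarity is below the
    threshold remain unassociated (-1)."""
    kp = kp_feats / np.linalg.norm(kp_feats, axis=1, keepdims=True)
    ct = center_feats / np.linalg.norm(center_feats, axis=1, keepdims=True)
    sim = kp @ ct.T                               # pairwise cosine similarities
    assignment = sim.argmax(axis=1)               # best-matching center per key point
    assignment[sim.max(axis=1) < threshold] = -1
    return assignment
```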
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention in any way, and all modifications and equivalents of the above embodiments that may be made in accordance with the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A method of accurately detecting a traffic sign or line, comprising:
superposing and combining the images of the traffic signs or the marked lines to form new synthesized signs or marked lines;
setting key points on the synthesized sign or marked line;
and detecting the key points, and identifying traffic information indicated or prompted by the traffic signs or marked lines corresponding to the key points.
2. The method of claim 1, wherein the traffic sign or marking is a guide arrow.
3. The method of claim 1, wherein the keypoints are identified by a label.
4. The method of claim 3, wherein identifying the keypoints by labels comprises:
the key points are marked by Arabic numerals or English letters or Chinese characters.
5. The method of claim 3, wherein identifying the keypoints by labels comprises:
the numbering is done in either a counterclockwise or clockwise or top-down order.
6. The method of claim 1, wherein the step of detecting the keypoints is performed by a deep convolutional network.
7. The method of claim 6, wherein the deep convolutional network generates a key point score map and a key point feature map through two convolutional branches, respectively, wherein the score map is used for measuring the probability that a key point exists at the detected position, and the feature map is used for describing whether key points correspond to the same traffic sign or marking.
8. The method of claim 7, wherein the score map screens the key point detection results by non-maximum suppression.
9. The method of claim 1, wherein the step of detecting the keypoints comprises:
the effect of deep learning is optimized by annotating images of real traffic scenes containing the traffic signs or markings.
CN201911376005.3A 2019-12-13 2019-12-27 Method for accurately detecting traffic signs or marking lines Pending CN112989900A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019112852556 2019-12-13
CN201911285255 2019-12-13

Publications (1)

Publication Number Publication Date
CN112989900A 2021-06-18

Family

ID=76344189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911376005.3A Pending CN112989900A (en) 2019-12-13 2019-12-27 Method for accurately detecting traffic signs or marking lines

Country Status (1)

Country Link
CN (1) CN112989900A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799859A (en) * 2012-06-20 2012-11-28 北京交通大学 Method for identifying traffic sign
KR101409340B1 (en) * 2013-03-13 2014-06-20 숭실대학교산학협력단 Method for traffic sign recognition and system thereof
CN104819724A (en) * 2015-03-02 2015-08-05 北京理工大学 Unmanned ground vehicle self-driving assisting system based on GIS
CN105144260A (en) * 2012-11-20 2015-12-09 罗伯特·博世有限公司 Method and device for detecting variable-message signs
CN108140235A (en) * 2015-10-14 2018-06-08 高通股份有限公司 For generating the system and method that image vision is shown
WO2018104563A2 (en) * 2016-12-09 2018-06-14 Tomtom Global Content B.V. Method and system for video-based positioning and mapping
CN108710826A (en) * 2018-04-13 2018-10-26 燕山大学 A kind of traffic sign deep learning mode identification method
CN108711298A (en) * 2018-05-20 2018-10-26 福州市极化律网络科技有限公司 A kind of mixed reality road display method
CN109815836A (en) * 2018-12-29 2019-05-28 江苏集萃智能制造技术研究所有限公司 A kind of urban road surfaces guiding arrow detection recognition method
CN110135307A (en) * 2019-04-30 2019-08-16 北京邮电大学 Method for traffic sign detection and device based on attention mechanism
CN110298262A (en) * 2019-06-06 2019-10-01 华为技术有限公司 Object identification method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
屈治华; 邵毅明; 邓天民; 朱杰; 宋晓华: "Traffic Sign Detection and Recognition under Complex Illumination Conditions" (复杂光照条件下的交通标志检测与识别), Laser & Optoelectronics Progress (激光与光电子学进展), no. 23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220727

Address after: Room 618, 6 / F, building 5, courtyard 15, Kechuang 10th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176

Applicant after: Xiaomi Automobile Technology Co.,Ltd.

Address before: 1219, floor 11, SOHO, Zhongguancun, No. 8, Haidian North Second Street, Haidian District, Beijing 100089

Applicant before: SHENDONG TECHNOLOGY (BEIJING) Co.,Ltd.
