CN113409393B - Method and device for identifying traffic sign - Google Patents

Method and device for identifying traffic sign

Info

Publication number
CN113409393B
CN113409393B (application number CN202110748454.7A)
Authority
CN
China
Prior art keywords
traffic sign
sample
image
position information
selection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110748454.7A
Other languages
Chinese (zh)
Other versions
CN113409393A (en)
Inventor
段旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110748454.7A priority Critical patent/CN113409393B/en
Publication of CN113409393A publication Critical patent/CN113409393A/en
Application granted granted Critical
Publication of CN113409393B publication Critical patent/CN113409393B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present disclosure disclose a method and a device for identifying traffic signs. One embodiment of the method comprises: scaling an image to be processed according to a set proportion to obtain at least one scaled image to be processed; for each scaled image to be processed of the at least one scaled image to be processed, importing the scaled image into a pre-trained traffic sign position recognition model to obtain position information of the traffic sign in the scaled image, and extracting feature information at the image position corresponding to the position information, wherein the traffic sign position recognition model is used for recognizing the position information of the traffic sign in the scaled image through at least one position sliding window; and fusing the at least one piece of feature information corresponding to the at least one scaled image to be processed to determine the final position information of the traffic sign in the image to be processed. This embodiment improves the accuracy of identifying traffic signs.

Description

Method and device for identifying traffic sign
This application is a divisional application of the original application entitled "A method and apparatus for identifying traffic signs", filed on May 17, 2019 with application number 201910412771.4.
Technical Field
The embodiment of the disclosure relates to the technical field of image processing, in particular to a method and a device for identifying traffic signboards.
Background
With the rapid development of intelligent vehicles and unmanned-driving technology, the detection and identification of traffic signs has become an important component of safe driving. An intelligent vehicle can acquire an image containing a traffic sign, identify the traffic sign from the image, and then drive autonomously according to the identified sign.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for identifying traffic signboards.
In a first aspect, embodiments of the present disclosure provide a method for identifying traffic signs, the method comprising: scaling an image to be processed according to a set proportion to obtain at least one scaled image to be processed; for each scaled image to be processed of the at least one scaled image to be processed, importing the scaled image into a pre-trained traffic sign position recognition model to obtain position information of the traffic sign in the scaled image, and extracting feature information at the image position corresponding to the position information, wherein the traffic sign position recognition model is used for recognizing the position information of the traffic sign in the scaled image through at least one position sliding window; and fusing the at least one piece of feature information corresponding to the at least one scaled image to be processed to determine final position information of the traffic sign in the image to be processed.
In some embodiments, the traffic sign position recognition model is obtained through training by the following steps: acquiring a plurality of sample images and sample position information of a traffic sign board corresponding to each sample image in the plurality of sample images; and taking each sample image of the plurality of sample images as an input, taking the sample position information corresponding to each sample image of the plurality of sample images as an output, and training to obtain the traffic sign position recognition model.
In some embodiments, the training to obtain the traffic sign position recognition model includes: the following training steps are performed: and sequentially inputting each sample image in the plurality of sample images into an initial traffic sign position recognition model to obtain predicted position information corresponding to each sample image in the plurality of sample images, comparing the predicted position information corresponding to each sample image in the plurality of sample images with the sample position information corresponding to the sample image to obtain the prediction accuracy of the initial traffic sign position recognition model, determining whether the prediction accuracy is greater than a preset accuracy threshold, and taking the initial traffic sign position recognition model as a trained traffic sign position recognition model if the prediction accuracy is greater than the preset accuracy threshold.
In some embodiments, the training to obtain the traffic sign position recognition model includes: in response to the prediction accuracy being not greater than the preset accuracy threshold, adjusting parameters of the initial traffic sign position recognition model and continuing to execute the training step.
In some embodiments, the sample position information is obtained by: carrying out image selection on the sample image through a sliding window to obtain a traffic sign board selection image set; calculating a selection accuracy value of a traffic sign board selection image in the traffic sign board selection image set, wherein the selection accuracy value is used for representing the ratio of intersection and union between pixels belonging to the traffic sign board in the sample image and all pixels of the traffic sign board in the sample image in the traffic sign board selection image; setting a traffic sign board selection image with the selection accuracy value being greater than or equal to a set threshold value as a positive sample traffic sign board selection image; and fusing all the positive sample traffic sign board selection images to obtain sample position information of the sample images.
In some embodiments, the fusing all the positive sample traffic sign board selection images to obtain the sample position information of the sample image includes: and carrying out feature extraction on the positions of the traffic signs in the positive sample traffic sign plate selection images through a preset position sliding window for the positive sample traffic sign plate selection images to obtain initial position information of the traffic signs in the positive sample traffic sign plate selection images, wherein the position sliding window comprises at least one of the following components: a nine-grid sliding window, a six-grid sliding window and a four-grid sliding window; and fusing all initial position information corresponding to all positive sample traffic sign board selection images to obtain sample position information of the sample images.
In some embodiments, the feature information includes at least one of: color feature information, shape feature information, and texture feature information. Fusing the at least one piece of feature information corresponding to the at least one scaled image to be processed and determining the final position information of the traffic sign in the image to be processed includes: performing a dimension reduction operation on the fused feature information to obtain dimension-reduced feature information; and fitting the dimension-reduced feature information to obtain the final position information of the traffic sign in the image to be processed.
In a second aspect, embodiments of the present disclosure provide an apparatus for identifying traffic signs, the apparatus comprising: a scaled image obtaining unit configured to scale an image to be processed according to a set proportion to obtain at least one scaled image to be processed; a feature information obtaining unit configured to, for each scaled image to be processed of the at least one scaled image to be processed, import the scaled image into a pre-trained traffic sign position recognition model to obtain position information of the traffic sign in the scaled image, and extract feature information at the image position corresponding to the position information, wherein the traffic sign position recognition model is used for recognizing the position information of the traffic sign in the scaled image through at least one position sliding window; and a traffic sign position identification unit configured to fuse the at least one piece of feature information corresponding to the at least one scaled image to be processed and determine final position information of the traffic sign in the image to be processed.
In some embodiments, the apparatus includes a traffic sign position recognition model training unit configured to train a traffic sign position recognition model, the traffic sign position recognition model training unit including: a sample information obtaining subunit configured to obtain a plurality of sample images and sample position information of a traffic sign board corresponding to each of the plurality of sample images; and a traffic sign position recognition model training subunit configured to train to obtain the traffic sign position recognition model by taking as input each of the plurality of sample images and taking as output the sample position information corresponding to each of the plurality of sample images.
In some embodiments, the traffic sign position recognition model training subunit includes: the traffic sign position recognition model training module is configured to sequentially input each sample image in the plurality of sample images into an initial traffic sign position recognition model to obtain prediction position information corresponding to each sample image in the plurality of sample images, compare the prediction position information corresponding to each sample image in the plurality of sample images with the sample position information corresponding to the sample image to obtain the prediction accuracy of the initial traffic sign position recognition model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and if so, use the initial traffic sign position recognition model as a trained traffic sign position recognition model.
In some embodiments, the traffic sign position recognition model training subunit includes: a parameter adjustment module configured to, in response to the prediction accuracy being not greater than the preset accuracy threshold, adjust parameters of the initial traffic sign position recognition model and return to the traffic sign position recognition model training module.
In some embodiments, the apparatus further includes a sample position information acquisition unit configured to acquire sample position information, the sample position information acquisition unit including: the traffic sign board selection image set acquisition subunit is configured to perform image selection on the sample image through a sliding window to obtain a traffic sign board selection image set; a selection accuracy value calculating subunit configured to calculate a selection accuracy value of a traffic sign board selection image in the traffic sign board selection image set, where the selection accuracy value is used to characterize a ratio of an intersection between pixels belonging to the traffic sign board in the sample image and all pixels of the traffic sign board in the sample image in the traffic sign board selection image to a union; a positive sample selection subunit configured to set a traffic-sign-panel selection image having a selection accuracy value equal to or greater than a set threshold value as a positive sample traffic-sign-panel selection image; and the sample position information acquisition subunit is configured to fuse all the positive sample traffic sign board selection images to obtain sample position information of the sample images.
In some embodiments, the sample position information acquiring subunit includes: the feature extraction module is configured to perform feature extraction on the positions of the traffic signs in the positive sample traffic sign selection image through a preset position sliding window for the positive sample traffic sign selection image to obtain initial position information of the traffic signs in the positive sample traffic sign selection image, wherein the position sliding window comprises at least one of the following: a nine-grid sliding window, a six-grid sliding window and a four-grid sliding window; and the sample position information acquisition module is configured to fuse all initial position information corresponding to all positive sample traffic sign board selection images to obtain sample position information of the sample images.
In some embodiments, the feature information includes at least one of: color feature information, shape feature information, and texture feature information. The traffic sign position identification unit includes: a dimension reduction subunit configured to perform a dimension reduction operation on the fused feature information to obtain dimension-reduced feature information; and a final position information acquisition subunit configured to fit the dimension-reduced feature information to obtain the final position information of the traffic sign in the image to be processed.
In a third aspect, embodiments of the present disclosure provide an electronic device, comprising: one or more processors; and a memory having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to perform the method for identifying traffic signs of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the method for identifying traffic signs of the first aspect described above.
The method and the device for identifying traffic signs provided by the embodiments of the present disclosure first scale the image to be processed according to a set proportion to obtain at least one scaled image to be processed; then, for each scaled image to be processed, import it into a pre-trained traffic sign position recognition model to obtain the position information of the traffic sign in that scaled image, and extract feature information at the image position corresponding to the position information; and finally fuse the at least one piece of feature information corresponding to the at least one scaled image to be processed to determine the final position information of the traffic sign in the image to be processed. The application improves the accuracy of identifying traffic signs.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a method for identifying traffic signs according to the present disclosure;
FIG. 3 is a schematic illustration of one application scenario of a method for identifying traffic signs according to the present disclosure;
FIG. 4 is a flow chart of one embodiment of a traffic sign position recognition model training method according to the present disclosure;
FIG. 5 is a schematic structural view of one embodiment of an apparatus for identifying traffic signs according to the present disclosure;
fig. 6 is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which a method for identifying traffic signs or an apparatus for identifying traffic signs of embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include vehicles 101, 102, 103, a network 104, and a server 105. The network 104 is the medium used to provide communication links between the vehicles 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The vehicles 101, 102, 103 interact with the server 105 through the network 104 to receive or send messages or the like. Various data processing applications may be installed on the vehicles 101, 102, 103, such as image acquisition applications, traffic light identification applications, data transmission applications, alert applications, and the like.
The vehicles 101, 102, 103 may be various vehicles having a plurality of data acquisition units and data processing units, including, but not limited to, unmanned vehicles, manned vehicles, electric vehicles, hybrid electric vehicles, internal combustion engine vehicles, and the like.
The server 105 may be a server that provides various services, for example a server that performs image processing on the to-be-processed images containing traffic signs sent from the vehicles 101, 102, 103. The server may analyze the received data, such as the image to be processed, and feed back the processing result (e.g., the position information of the traffic sign) to the vehicles 101, 102, 103.
It should be noted that the method for identifying a traffic sign provided by the embodiment of the present disclosure may be performed by the vehicles 101, 102, 103 alone or may be performed by the vehicles 101, 102, 103 and the server 105 together. Accordingly, the means for identifying traffic signs may be provided in the vehicles 101, 102, 103 or in the server 105.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (for example, to provide a distributed service), or may be implemented as a single software or software module, which is not specifically limited herein.
It should be understood that the number of vehicles, networks, and servers in fig. 1 are merely illustrative. There may be any number of vehicles, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for identifying traffic signs according to the present disclosure is shown. The method for identifying the traffic sign comprises the following steps:
step 201, scaling the image to be processed according to a set proportion to obtain at least one scaled image to be processed.
In the present embodiment, the execution subject of the method for identifying a traffic sign (e.g., the vehicles 101, 102, 103 and/or the server 105 shown in fig. 1) may acquire the image to be processed through a wired or wireless connection. The image to be processed may be a road image containing a traffic sign. The image to be processed may be acquired by cameras on the vehicles 101, 102, 103, or may be received from other terminal devices (e.g., traffic monitoring cameras). It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra wideband) connections, and other now known or later developed wireless connection means.
In practice, traffic signs come in various shapes. When prior-art methods are used to identify traffic signs, they are easily affected by various environmental factors: they generally work well for signs of a specified shape, but their identification accuracy for signs of other shapes is low.
In order to improve accuracy of identifying traffic signs, after the execution body acquires the image to be processed, the execution body can scale the image to be processed according to a set proportion to obtain at least one scaled image to be processed.
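The scaling step can be illustrated with a short sketch. The example below is a minimal sketch assuming OpenCV is used for resizing; the specific scale factors are illustrative placeholders, not values prescribed by the embodiment.

```python
import cv2

def build_scaled_images(image, scale_factors=(0.5, 1.0, 2.0)):
    """Scale the image to be processed by each set proportion.

    The scale_factors tuple is an assumed example; the embodiment only
    requires that at least one scaled image to be processed is produced.
    """
    scaled_images = []
    for factor in scale_factors:
        resized = cv2.resize(image, None, fx=factor, fy=factor,
                             interpolation=cv2.INTER_LINEAR)
        scaled_images.append((factor, resized))
    return scaled_images
```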
Step 202, for each scaled image to be processed of the at least one scaled image to be processed, importing the scaled image into the pre-trained traffic sign position recognition model to obtain position information of the traffic sign in the scaled image, and extracting feature information at the image position corresponding to the position information.
The execution subject can import the scaled image to be processed into the pre-trained traffic sign position recognition model to obtain the position information of the traffic sign in that scaled image. The position information may be represented by coordinate values of the traffic sign on the scaled image to be processed. The traffic sign position recognition model can recognize the position information of the traffic sign in the scaled image through at least one position sliding window, and the position information may be used to mark the structure of the traffic sign. Thereafter, the execution subject may extract feature information at the image position corresponding to the position information; similarly, the feature information can characterize structural features of the traffic sign.
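A minimal sketch of this per-scale step follows. The model interface (predict_positions) and the feature extractor are hypothetical placeholders standing in for the pre-trained traffic sign position recognition model and the color/shape/texture feature extraction; only the overall flow reflects the description above.

```python
def extract_features_per_scale(scaled_images, position_model, feature_extractor):
    """For each scaled image, predict sign positions and extract features there."""
    per_scale_features = []
    for factor, image in scaled_images:
        # Hypothetical model call: returns boxes as (x1, y1, x2, y2) pixel coordinates.
        boxes = position_model.predict_positions(image)
        for (x1, y1, x2, y2) in boxes:
            patch = image[y1:y2, x1:x2]
            # Feature information may include color, shape and texture features.
            features = feature_extractor(patch)
            per_scale_features.append((factor, (x1, y1, x2, y2), features))
    return per_scale_features
```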
In some optional implementations of this embodiment, the traffic sign position recognition model is obtained by training the following steps:
the method comprises the steps of obtaining a plurality of sample images and sample position information of traffic sign plates corresponding to each sample image in the plurality of sample images.
When training the traffic sign position recognition model, the execution subject may first acquire a sample image and sample position information corresponding to the sample image. Wherein the sample image comprises a traffic sign image; the sample position information described above may be used to mark the shape of a traffic sign.
The second step, training to obtain the traffic sign position recognition model by taking each sample image of the plurality of sample images as an input and the sample position information corresponding to each sample image as an output.
The executing entity may train the traffic sign position recognition model through a variety of networks (e.g., may be convolutional neural networks, deep learning networks, etc.). The execution subject can take the sample image as network input, take sample position information corresponding to the sample image as network output, and train to obtain the traffic sign position recognition model.
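One way to realize this input/output training is ordinary supervised regression of box coordinates, sketched below with PyTorch. The network architecture, coordinate encoding, and loss are assumptions made only for illustration; the embodiment fixes no more than that sample images are the input and sample position information is the output.

```python
import torch
import torch.nn as nn

class SignPositionNet(nn.Module):
    """Toy convolutional regressor: image -> 4 box coordinates (assumed encoding)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 4)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, sample_images, sample_positions):
    """One update: sample_images (N, 3, H, W) as input, sample_positions (N, 4) as output."""
    optimizer.zero_grad()
    predicted_positions = model(sample_images)
    loss = nn.functional.smooth_l1_loss(predicted_positions, sample_positions)
    loss.backward()
    optimizer.step()
    return loss.item()
```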
In some optional implementations of the present embodiment, the sample position information may be obtained by:
the first step, the sample image is selected through a sliding window, and a traffic sign board selection image set is obtained.
The execution subject can select the images on the sample images through the sliding window to obtain a traffic sign board selection image set corresponding to the sample images.
And secondly, calculating the selection accuracy value of the traffic sign board selection image in the traffic sign board selection image set.
The traffic sign selection image may include all or part of a traffic sign image. To characterize how well the traffic sign image within a selection image matches the actual traffic sign in the sample image, the execution subject may calculate a selection accuracy value for the traffic sign selection image. The selection accuracy value represents the ratio of intersection to union between the pixels of the sample image's traffic sign that fall within the selection image and all pixels of the traffic sign in the sample image. The selection accuracy value may also be calculated in other ways, for example as the percentage of pixels in the selection image that belong to the sample image's traffic sign, as determined by actual needs.
And thirdly, setting the traffic sign board selection image with the selection accuracy value being greater than or equal to the set threshold value as a positive sample traffic sign board selection image.
Thereafter, the executing body may screen the traffic sign selection image in accordance with the selection accuracy value. The execution body may set the traffic-sign-panel selection image whose selection accuracy value is equal to or greater than the set threshold value as the positive-sample traffic-sign-panel selection image.
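Read literally, the selection accuracy value is an intersection-over-union between the traffic sign pixels covered by a sliding-window selection and all traffic sign pixels of the sample image. The following is a minimal sketch under that reading; the 0.5 threshold is an assumed example of the set threshold.

```python
import numpy as np

def selection_accuracy(selection_box, sign_mask):
    """Selection accuracy value of one sliding-window selection.

    selection_box: (x1, y1, x2, y2) of the selection on the sample image.
    sign_mask: boolean array, True where a pixel belongs to the traffic sign.
    """
    x1, y1, x2, y2 = selection_box
    window = np.zeros_like(sign_mask, dtype=bool)
    window[y1:y2, x1:x2] = True
    sign_in_window = window & sign_mask
    intersection = np.count_nonzero(sign_in_window & sign_mask)
    union = np.count_nonzero(sign_in_window | sign_mask)
    return intersection / union if union else 0.0

def select_positive_samples(selection_boxes, sign_mask, threshold=0.5):
    """Keep selections whose accuracy value is greater than or equal to the set threshold."""
    return [box for box in selection_boxes
            if selection_accuracy(box, sign_mask) >= threshold]
```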
And fourthly, fusing all the positive sample traffic sign board selection images to obtain sample position information of the sample images.
The positive sample traffic sign selection image contains more traffic sign information. The execution body can fuse the positive sample traffic sign board selection images according to the traffic sign board information contained in the positive sample traffic sign board selection images to obtain sample position information of the sample images.
In some optional implementations of this embodiment, the fusing all the positive sample traffic sign selection images to obtain the sample position information of the sample image may include the following steps:
the first step, for all positive sample traffic sign board selection images, the positions of the traffic signs in the positive sample traffic sign board selection images are subjected to feature extraction through a preset position sliding window, and initial position information of the traffic signs in the positive sample traffic sign board selection images is obtained.
As is clear from the above description, the shape of a traffic sign in practice varies. In order to obtain the position information of the traffic sign as accurately as possible, the executing body can perform feature extraction on the position of the traffic sign in the positive sample traffic sign selection image through a preset position sliding window to obtain the initial position information of the traffic sign in the positive sample traffic sign selection image. Wherein, the position sliding window may include at least one of the following: a nine-grid sliding window, a six-grid sliding window (for example, two rows and three columns of six grids, three rows and two columns of six grids), and a four-grid sliding window. In addition, the position sliding window may be another type of window (for example, a regular hexagonal sliding window, a triangular sliding window), which will not be described in detail herein. The initial position information may be represented by coordinates of the traffic sign in the sample image in the positive sample traffic sign selection image, or the like.
And secondly, fusing all initial position information corresponding to all positive sample traffic sign board selection images to obtain sample position information of the sample images.
After obtaining the initial position information, the executing body may fuse the initial position information in various manners (for example, may fit coordinate points of traffic sign boards corresponding to the initial position information, etc.), to obtain sample position information of the sample image.
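The grid-style position sliding windows and the fusion of initial positions could be sketched as follows. The grid geometries follow the nine-grid, six-grid, and four-grid windows named above, while fusion by averaging box corners is only one assumed instance of "fitting the coordinate points".

```python
import numpy as np

def grid_cells(box, rows, cols):
    """Split a positive-sample selection box into a rows x cols grid of cells,
    e.g. 3x3 (nine-grid), 2x3 or 3x2 (six-grid), 2x2 (four-grid)."""
    x1, y1, x2, y2 = box
    xs = np.linspace(x1, x2, cols + 1)
    ys = np.linspace(y1, y2, rows + 1)
    return [(xs[j], ys[i], xs[j + 1], ys[i + 1])
            for i in range(rows) for j in range(cols)]

def fuse_initial_positions(initial_boxes):
    """Fuse all initial position information into the sample position information,
    here simply by averaging the corners of the positive-sample boxes."""
    boxes = np.asarray(initial_boxes, dtype=float)
    return tuple(boxes.mean(axis=0))
```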
And 203, fusing at least one piece of characteristic information corresponding to the at least one zoomed image to be processed, and determining the final position information of the traffic sign board in the image to be processed.
After the feature information is obtained, the executing body can fuse the feature information to obtain the final position information of the traffic sign. Because the characteristic information is obtained based on the position information, the fused characteristic information can represent the accurate position of the traffic sign. After the final position information of the traffic sign is obtained, the accurate identification of the traffic sign can be realized.
In some optional implementations of this embodiment, the fusing at least one feature information corresponding to the at least one scaled image to be processed to determine final location information of the traffic sign in the image to be processed may include the following steps:
and firstly, performing dimension reduction operation on the fused characteristic information to obtain dimension reduction characteristic information.
The feature information may include at least one of: color feature information, shape feature information, texture feature information, and the like, so the amount of fused feature information is large. In order to process the data quickly, the execution subject can perform a dimension reduction operation on the fused feature information to obtain dimension-reduced feature information. The dimension reduction operation extracts representative feature information from the fused feature information. Specifically, the dimension reduction operation may include: deleting feature information whose occurrence count is smaller than a set number; or selecting one piece of feature information from a large group of identical or similar pieces of feature information and deleting the rest. Other dimension reduction approaches are also possible and are not detailed here.
And secondly, fitting the dimension reduction characteristic information to obtain final position information of the traffic sign board in the image to be processed.
The executing body can fit the dimension reduction characteristic information to obtain the final position information of the traffic sign board in the image to be processed. Therefore, the method can realize quick processing of data, and can acquire accurate position information of the traffic sign board, so that the efficiency of identifying the traffic sign board is improved.
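A sketch of the fuse / reduce / fit pipeline is given below. It assumes the dimension reduction variant that keeps one representative of each group of identical or similar feature vectors, and fits the final position by averaging the per-scale boxes mapped back to original-image coordinates; both are illustrative choices, not the only ones covered by the description.

```python
import numpy as np

def reduce_dimension(feature_vectors, similarity=0.95):
    """Keep one representative from each group of same or similar feature vectors."""
    kept = []
    for f in feature_vectors:
        f = np.asarray(f, dtype=float)
        unit = f / (np.linalg.norm(f) + 1e-8)
        if all(float(np.dot(unit, k)) < similarity for k in kept):
            kept.append(unit)
    return kept

def fit_final_position(boxes, scale_factors):
    """Map each per-scale box back to original-image coordinates and fit one final box."""
    restored = np.array([[c / s for c in box] for box, s in zip(boxes, scale_factors)])
    return tuple(restored.mean(axis=0))
```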
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for identifying traffic signs according to the present embodiment. In the application scenario of fig. 3, the vehicle may first scale the image to be processed according to a set proportion to obtain at least one scaled image to be processed; then the vehicle can import each scaled image to be processed into the pre-trained traffic sign position recognition model to obtain the position information of the traffic sign in that scaled image, and extract feature information at the image position corresponding to the position information; finally, the vehicle fuses all the obtained feature information to obtain the final position information of the traffic sign (shown by the dotted box in fig. 3).
The method provided by the embodiment of the present disclosure first scales the image to be processed according to a set proportion to obtain at least one scaled image to be processed; then, for each scaled image to be processed, imports it into a pre-trained traffic sign position recognition model to obtain the position information of the traffic sign in that scaled image, and extracts feature information at the image position corresponding to the position information; and finally fuses the at least one piece of feature information corresponding to the at least one scaled image to be processed to determine the final position information of the traffic sign in the image to be processed. The application improves the accuracy of identifying traffic signs.
With further reference to FIG. 4, a flow 400 of one embodiment of a traffic sign position identification model training method is shown. The process 400 of the traffic sign position recognition model training method comprises the following steps:
step 401, acquiring a plurality of sample images and sample position information of a traffic sign board corresponding to each sample image in the plurality of sample images.
In this embodiment, the execution subject (e.g., the server 105 shown in fig. 1) of the traffic sign position recognition model training method may acquire a plurality of sample images and sample position information corresponding to each of the plurality of sample images.
Step 402, sequentially inputting each sample image in the plurality of sample images to an initial traffic sign position recognition model to obtain predicted position information corresponding to each sample image in the plurality of sample images.
In this embodiment, based on the plurality of sample images acquired in step 401, the execution subject may sequentially input each of the plurality of sample images into the initial traffic sign position recognition model, thereby obtaining predicted position information corresponding to each of the plurality of sample images. Here, the execution subject may feed each sample image in at the input side of the initial traffic sign position recognition model, pass it sequentially through the processing of the parameters of each layer in the model, and take the information output at the output side as the predicted position information corresponding to that sample image. The initial traffic sign position recognition model may be an untrained model or an incompletely trained model (such as a deep learning model); each of its layers is provided with initialization parameters, and these parameters can be continuously adjusted during training of the model.
And step 403, comparing the predicted position information corresponding to each sample image in the plurality of sample images with the sample position information corresponding to the sample image to obtain the prediction accuracy of the initial traffic sign position recognition model.
Based on the predicted position information corresponding to each of the plurality of sample images obtained in step 402, the execution subject may compare the predicted position information corresponding to each sample image with the sample position information corresponding to that sample image, thereby obtaining the prediction accuracy of the initial traffic sign position recognition model. Specifically, if the predicted position information corresponding to a sample image is the same as or similar to the sample position information corresponding to that sample image, the prediction of the initial traffic sign position recognition model is correct; if the predicted position information differs from the sample position information, the prediction is wrong. Here, the execution subject may calculate the ratio of the number of correct predictions to the total number of samples, and take this ratio as the prediction accuracy of the initial traffic sign position recognition model.
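The comparison of steps 402 and 403 can be sketched as follows; treating "same or similar" as an IoU test between predicted and sample boxes is an assumption made for illustration.

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def prediction_accuracy(predicted_boxes, sample_boxes, similar_iou=0.5):
    """Ratio of the number of correct predictions to the total number of samples."""
    correct = sum(1 for p, s in zip(predicted_boxes, sample_boxes)
                  if box_iou(p, s) >= similar_iou)
    return correct / len(sample_boxes)
```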
Step 404, determining whether the prediction accuracy is greater than a preset accuracy threshold.
Based on the prediction accuracy of the initial traffic sign position recognition model obtained in step 403, the executing body may compare the prediction accuracy of the initial traffic sign position recognition model with a preset accuracy threshold. If the accuracy is greater than the preset accuracy threshold, step 405 is executed; if not, step 406 is performed.
And 405, taking the initial traffic sign position recognition model as a trained traffic sign position recognition model.
In this embodiment, when the prediction accuracy of the initial traffic sign position recognition model is greater than the preset accuracy threshold, it is indicated that training of the initial traffic sign position recognition model is completed, and at this time, the executing body may use the initial traffic sign position recognition model as the trained traffic sign position recognition model.
And step 406, adjusting parameters of the initial traffic sign position identification model.
In this embodiment, when the prediction accuracy of the initial traffic sign position recognition model is not greater than the preset accuracy threshold, the execution subject may adjust the parameters of the initial traffic sign position recognition model and return to step 402, until a traffic sign position recognition model capable of obtaining accurate position information has been trained.
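Steps 402 to 406 together form the loop sketched below, reusing the prediction_accuracy helper from the previous sketch. The predict_all and adjust_parameters callables are hypothetical stand-ins for the model's forward passes and the parameter adjustment, and the accuracy threshold is an assumed value.

```python
def train_until_accurate(model, sample_images, sample_positions,
                         predict_all, adjust_parameters, accuracy_threshold=0.9):
    """Repeat prediction, comparison and parameter adjustment until the
    prediction accuracy exceeds the preset accuracy threshold."""
    while True:
        predicted = predict_all(model, sample_images)                  # step 402
        accuracy = prediction_accuracy(predicted, sample_positions)    # step 403
        if accuracy > accuracy_threshold:                              # steps 404-405
            return model
        adjust_parameters(model, sample_images, sample_positions)      # step 406
```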
With further reference to fig. 5, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of an apparatus for identifying traffic signs, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for identifying traffic signs of the present embodiment may include: a scaled image acquisition unit 501, a feature information acquisition unit 502, and a traffic sign position recognition unit 503. The scaled image acquisition unit 501 is configured to scale the image to be processed according to a set proportion to obtain at least one scaled image to be processed; the feature information acquisition unit 502 is configured to, for each scaled image to be processed of the at least one scaled image to be processed, import the scaled image into a pre-trained traffic sign position recognition model to obtain position information of the traffic sign in the scaled image, and extract feature information at the image position corresponding to the position information, wherein the traffic sign position recognition model is used for recognizing the position information of the traffic sign in the scaled image through at least one position sliding window; the traffic sign position recognition unit 503 is configured to fuse the at least one piece of feature information corresponding to the at least one scaled image to be processed and determine final position information of the traffic sign in the image to be processed.
In some optional implementations of this embodiment, the apparatus 500 for identifying a traffic sign includes a traffic sign position identification model training unit (not shown in the figure) configured to train a traffic sign position identification model, where the traffic sign position identification model training unit includes: a sample information acquisition subunit (not shown) and a traffic sign position recognition model training subunit (not shown). The sample information acquisition subunit is configured to acquire a plurality of sample images and sample position information of the traffic sign board corresponding to each sample image in the plurality of sample images; and a traffic sign position recognition model training subunit configured to train to obtain the traffic sign position recognition model by taking as input each of the plurality of sample images and taking as output the sample position information corresponding to each of the plurality of sample images.
In some optional implementations of this embodiment, the traffic sign position recognition model training subunit includes: the traffic sign position recognition model training module (not shown in the figure) is configured to sequentially input each sample image in the plurality of sample images into an initial traffic sign position recognition model to obtain predicted position information corresponding to each sample image in the plurality of sample images, compare the predicted position information corresponding to each sample image in the plurality of sample images with the sample position information corresponding to the sample image to obtain the prediction accuracy of the initial traffic sign position recognition model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and if so, use the initial traffic sign position recognition model as a trained traffic sign position recognition model.
In some optional implementations of this embodiment, the traffic sign position recognition model training subunit may include: a parameter adjustment module (not shown) is configured to adjust parameters of the initial traffic sign position recognition model in response to not greater than the preset accuracy threshold and return to the traffic sign position recognition model training module.
In some optional implementations of the present embodiment, the apparatus 500 for identifying a traffic sign may further include a sample position information acquiring unit (not shown in the drawings) configured to acquire sample position information, and the sample position information acquiring unit may include: a traffic sign board selection image set acquisition subunit (not shown in the figure), a selection accuracy value calculation subunit (not shown in the figure), a positive sample selection subunit (not shown in the figure), and a sample position information acquisition subunit (not shown in the figure). The traffic sign board selection image collection acquisition subunit is configured to perform image selection on the sample images through a sliding window to obtain a traffic sign board selection image collection; a selection accuracy value calculating subunit configured to calculate a selection accuracy value of a traffic sign board selection image in the traffic sign board selection image set, where the selection accuracy value is used to characterize a ratio of an intersection between pixels belonging to the traffic sign board in the sample image and all pixels of the traffic sign board in the sample image in the traffic sign board selection image to a union; a positive sample selection subunit configured to set a traffic-sign-panel selection image having a selection accuracy value equal to or greater than a set threshold value as a positive sample traffic-sign-panel selection image; and the sample position information acquisition subunit is configured to fuse all the positive sample traffic sign board selection images to obtain sample position information of the sample images.
In some optional implementations of this embodiment, the sample position information obtaining subunit may include: a feature extraction module (not shown) and a sample position information acquisition module (not shown). The feature extraction module is configured to perform feature extraction on the positions of the traffic signs in the positive sample traffic sign plate selection image through a preset position sliding window for the positive sample traffic sign plate selection image, so as to obtain initial position information of the traffic signs in the positive sample traffic sign plate selection image, wherein the position sliding window comprises at least one of the following components: a nine-grid sliding window, a six-grid sliding window and a four-grid sliding window; and the sample position information acquisition module is configured to fuse all initial position information corresponding to all positive sample traffic sign board selection images to obtain sample position information of the sample images.
In some optional implementations of this embodiment, the characteristic information includes at least one of: color feature information, shape feature information, texture feature information. And the above-described traffic sign position recognition unit 503 may include: a dimension reduction subunit (not shown) and a final position information acquisition subunit (not shown). The dimension reduction subunit is configured to perform dimension reduction operation on the fused characteristic information to obtain dimension reduction characteristic information; the final position information acquisition subunit is configured to fit the dimension reduction characteristic information to obtain final position information of the traffic sign in the image to be processed.
The embodiment also provides an electronic device, including: one or more processors; and a memory having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to perform the method for identifying traffic signs.
The present embodiment also provides a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the above-described method for identifying traffic signs.
Referring now to FIG. 6, there is illustrated a schematic diagram of a computer system 600 suitable for use with an electronic device (e.g., server 105 of FIG. 1) implementing embodiments of the present disclosure. The electronic device shown in fig. 6 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 6 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 601.
It should be noted that, the above-mentioned computer readable medium according to the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the above-mentioned two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device, or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: scale the image to be processed according to a set proportion to obtain at least one scaled image to be processed; for each scaled image to be processed of the at least one scaled image to be processed, import the scaled image into a pre-trained traffic sign position recognition model to obtain position information of the traffic sign in the scaled image, and extract feature information at the image position corresponding to the position information, wherein the traffic sign position recognition model is used for recognizing the position information of the traffic sign in the scaled image through at least one position sliding window; and fuse the at least one piece of feature information corresponding to the at least one scaled image to be processed to determine the final position information of the traffic sign in the image to be processed.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by means of software or by means of hardware. The described units may also be provided in a processor, for example, described as: a processor including a scaled image acquisition unit, a feature information acquisition unit, and a traffic sign position identification unit. In some cases, the names of these units do not limit the units themselves; for example, the traffic sign position identification unit may also be described as "a unit for obtaining position information of a traffic sign from feature information".
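Purely as an illustration of this unit decomposition (the class and attribute names below are invented for the sketch and are not part of the disclosure), the three units could be grouped on a single device object along these lines:

    class TrafficSignRecognitionDevice:
        """Hypothetical grouping of the three units named above."""

        def __init__(self, scaler, feature_extractor, position_recognizer):
            self.acquire_scaled_images = scaler            # scaled image acquisition unit
            self.acquire_features = feature_extractor      # feature information acquisition unit
            self.identify_position = position_recognizer   # traffic sign position identification unit

        def run(self, image):
            scaled_images = self.acquire_scaled_images(image)
            features = [self.acquire_features(img) for img in scaled_images]
            return self.identify_position(features)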
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in this disclosure is not limited to the specific combination of features described above, but also encompasses other embodiments in which the features described above, or their equivalents, are combined in any way without departing from the spirit of the invention, for example, embodiments in which the features described above are replaced with (but not limited to) technical features having similar functions disclosed in the present disclosure.

Claims (12)

1. A method for identifying traffic signs, comprising:
scaling the image to be processed according to a set proportion to obtain at least one scaled image to be processed;
for each scaled image to be processed among the at least one scaled image to be processed, importing the scaled image to be processed into a pre-trained traffic sign position recognition model to obtain position information of a traffic sign board in the scaled image to be processed, and extracting feature information at the image position corresponding to the position information, wherein the traffic sign position recognition model is used for recognizing the position information of the traffic sign board in the scaled image to be processed through at least one position sliding window;
fusing the at least one piece of feature information corresponding to the at least one scaled image to be processed, and determining final position information of the traffic sign board in the image to be processed;
the traffic sign position recognition model is obtained through training the following steps:
acquiring a plurality of sample images and sample position information of a traffic sign board corresponding to each sample image in the plurality of sample images, wherein the sample position information is used for marking the shape of the traffic sign board;
taking each sample image of the plurality of sample images as input, taking the sample position information corresponding to each sample image of the plurality of sample images as output, and training to obtain the traffic sign position recognition model;
wherein, the sample position information is obtained by the following steps:
carrying out image selection on the sample image through a sliding window to obtain a traffic sign board selection image set;
calculating a selection accuracy value of a traffic sign board selection image in the traffic sign board selection image set, wherein the selection accuracy value characterizes the ratio of the intersection to the union between the pixels in the traffic sign board selection image that belong to the traffic sign board in the sample image and all pixels of the traffic sign board in the sample image;
setting a traffic sign board selection image whose selection accuracy value is greater than or equal to a set threshold value as a positive sample traffic sign board selection image;
and fusing all the positive sample traffic sign board selection images to obtain the sample position information of the sample image.
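One way to read the selection accuracy and positive-sample fusion steps above is as a pixel-level intersection-over-union against the annotated sign mask; the sketch below follows that reading, and the boolean-mask representation, the 0.5 threshold, and the union-based fusion are illustrative assumptions rather than the claimed method.

    import numpy as np

    def selection_accuracy(window_mask, sign_mask):
        """IoU-style ratio between sign pixels captured by the window and all sign pixels."""
        captured = np.logical_and(window_mask, sign_mask)  # sign pixels inside the window
        inter = np.logical_and(captured, sign_mask).sum()
        union = np.logical_or(captured, sign_mask).sum()
        return float(inter) / union if union else 0.0

    def sample_position_from_windows(window_masks, sign_mask, threshold=0.5):
        """Keep windows whose selection accuracy meets the threshold and fuse them by union."""
        positives = [w for w in window_masks
                     if selection_accuracy(w, sign_mask) >= threshold]
        fused = np.zeros_like(sign_mask, dtype=bool)
        for w in positives:
            fused |= w
        return fused  # sample position information as a fused mask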
2. The method of claim 1, wherein training the traffic sign position recognition model with each of the plurality of sample images as an input and the sample position information corresponding to each of the plurality of sample images as an output comprises:
performing the following training step: sequentially inputting each sample image of the plurality of sample images into an initial traffic sign position recognition model to obtain predicted position information corresponding to each sample image of the plurality of sample images, comparing the predicted position information corresponding to each sample image of the plurality of sample images with the sample position information corresponding to that sample image to obtain the prediction accuracy of the initial traffic sign position recognition model, determining whether the prediction accuracy is greater than a preset accuracy threshold, and, if the prediction accuracy is greater than the preset accuracy threshold, taking the initial traffic sign position recognition model as the trained traffic sign position recognition model.
3. The method according to claim 2, wherein training to obtain the traffic sign position recognition model with each of the plurality of sample images as input and the sample position information corresponding to each of the plurality of sample images as output further comprises:
adjusting parameters of the initial traffic sign position recognition model in response to the prediction accuracy being not greater than the preset accuracy threshold, and continuing to execute the training step.
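A minimal sketch of the training loop described in claims 2 and 3, assuming a generic model object with predict and adjust_parameters methods, boxes in (x, y, w, h) form, and an IoU-based match criterion; all of these concrete choices are assumptions made for illustration only.

    def positions_match(predicted, target, iou_threshold=0.5):
        """Hypothetical comparison: two (x, y, w, h) boxes match above an IoU cutoff."""
        px, py, pw, ph = predicted
        tx, ty, tw, th = target
        iw = max(0, min(px + pw, tx + tw) - max(px, tx))
        ih = max(0, min(py + ph, ty + th) - max(py, ty))
        inter = iw * ih
        union = pw * ph + tw * th - inter
        return union > 0 and inter / union >= iou_threshold

    def train_position_model(model, sample_images, sample_positions,
                             accuracy_threshold=0.9, max_rounds=100):
        """Repeat the training step until prediction accuracy exceeds the threshold."""
        for _ in range(max_rounds):
            correct = sum(
                positions_match(model.predict(image), target)
                for image, target in zip(sample_images, sample_positions))
            accuracy = correct / len(sample_images)
            if accuracy > accuracy_threshold:
                return model                               # trained model
            model.adjust_parameters(sample_images, sample_positions)
        return model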
4. The method of claim 1, wherein the fusing all positive sample traffic sign board selection images to obtain sample position information of the sample image comprises:
for each positive sample traffic sign board selection image, performing feature extraction on the position of the traffic sign in the positive sample traffic sign board selection image through a preset position sliding window to obtain initial position information of the traffic sign in the positive sample traffic sign board selection image, wherein the position sliding window comprises at least one of the following: a nine-grid sliding window, a six-grid sliding window, and a four-grid sliding window;
and fusing all the initial position information corresponding to all the positive sample traffic sign board selection images to obtain the sample position information of the sample image.
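One possible reading of the grid-style position sliding windows in claim 4 is to split a positive-sample crop into a 3x3 ("nine-grid"), 2x3 ("six-grid"), or 2x2 ("four-grid") layout and record which cells contain sign pixels as coarse initial position information; this interpretation and the cell-occupancy feature are assumptions, since the claim does not define the grids further.

    import numpy as np

    GRID_LAYOUTS = {"nine": (3, 3), "six": (2, 3), "four": (2, 2)}

    def grid_position_features(sign_mask, layout="nine"):
        """Split a positive-sample crop into grid cells and mark cells containing sign pixels."""
        rows, cols = GRID_LAYOUTS[layout]
        h, w = sign_mask.shape
        occupancy = np.zeros((rows, cols), dtype=bool)
        for r in range(rows):
            for c in range(cols):
                cell = sign_mask[r * h // rows:(r + 1) * h // rows,
                                 c * w // cols:(c + 1) * w // cols]
                occupancy[r, c] = bool(cell.any())
        return occupancy  # coarse initial position information for this crop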
5. The method of any of claims 1 to 4, wherein the feature information comprises at least one of: color feature information, shape feature information, and texture feature information; and
the fusing the at least one piece of feature information corresponding to the at least one scaled image to be processed and determining the final position information of the traffic sign in the image to be processed comprises:
performing a dimension reduction operation on the fused feature information to obtain dimension-reduced feature information;
fitting the dimension-reduced feature information to obtain the final position information of the traffic sign board in the image to be processed.
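A hedged sketch of the fusion step in claim 5, using PCA from scikit-learn for the dimension reduction and a distance-weighted average of the per-scale candidate boxes as the fit; both concrete choices are assumptions, since the claim names neither a reduction algorithm nor a fitting procedure.

    import numpy as np
    from sklearn.decomposition import PCA

    def fuse_and_fit(feature_vectors, candidate_boxes, n_components=2):
        """Reduce fused per-scale features, then fit a final (x, y, w, h) position."""
        fused = np.stack(feature_vectors)                   # one feature row per scaled image
        n = min(n_components, fused.shape[0], fused.shape[1])
        reduced = PCA(n_components=n).fit_transform(fused)  # dimension-reduced feature information
        # Weight each candidate box by how close its reduced feature lies to the centroid.
        dists = np.linalg.norm(reduced - reduced.mean(axis=0), axis=1)
        weights = 1.0 / (dists + 1e-6)
        weights /= weights.sum()
        boxes = np.asarray(candidate_boxes, dtype=float)    # one (x, y, w, h) per scaled image
        return tuple(weights @ boxes)                       # fitted final position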
6. An apparatus for identifying traffic signs, comprising:
a scaled image obtaining unit configured to scale the image to be processed according to a set proportion to obtain at least one scaled image to be processed;
a feature information acquisition unit configured to, for each scaled image to be processed among the at least one scaled image to be processed, import the scaled image to be processed into a pre-trained traffic sign position recognition model to obtain position information of a traffic sign in the scaled image to be processed, and extract feature information at the image position corresponding to the position information, wherein the traffic sign position recognition model is used for recognizing the position information of the traffic sign in the scaled image to be processed through at least one position sliding window;
a traffic sign position identification unit configured to fuse the at least one piece of feature information corresponding to the at least one scaled image to be processed and determine final position information of the traffic sign in the image to be processed;
a traffic sign position recognition model training unit configured to train the traffic sign position recognition model, the traffic sign position recognition model training unit comprising: a sample information obtaining subunit configured to obtain a plurality of sample images and sample position information of a traffic sign board corresponding to each of the plurality of sample images, wherein the sample position information is used for marking the shape of the traffic sign board; and a traffic sign position recognition model training subunit configured to take each sample image of the plurality of sample images as input, take the sample position information corresponding to each sample image of the plurality of sample images as output, and train to obtain the traffic sign position recognition model;
a sample position information acquisition unit configured to acquire the sample position information, the sample position information acquisition unit comprising: a traffic sign board selection image set acquisition subunit configured to perform image selection on the sample image through a sliding window to obtain a traffic sign board selection image set; a selection accuracy value calculating subunit configured to calculate a selection accuracy value of a traffic sign board selection image in the traffic sign board selection image set, wherein the selection accuracy value characterizes the ratio of the intersection to the union between the pixels in the traffic sign board selection image that belong to the traffic sign board in the sample image and all pixels of the traffic sign board in the sample image; a positive sample selection subunit configured to set a traffic sign board selection image whose selection accuracy value is greater than or equal to a set threshold value as a positive sample traffic sign board selection image; and a sample position information acquisition subunit configured to fuse all the positive sample traffic sign board selection images to obtain the sample position information of the sample image.
7. The apparatus of claim 6, wherein the traffic sign position recognition model training subunit comprises:
a traffic sign position recognition model training module configured to sequentially input each sample image of the plurality of sample images into an initial traffic sign position recognition model to obtain predicted position information corresponding to each sample image of the plurality of sample images, compare the predicted position information corresponding to each sample image of the plurality of sample images with the sample position information corresponding to that sample image to obtain the prediction accuracy of the initial traffic sign position recognition model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and, if so, take the initial traffic sign position recognition model as the trained traffic sign position recognition model.
8. The apparatus of claim 7, wherein the traffic sign position recognition model training subunit comprises:
and the parameter adjustment module is used for responding to the condition that the parameter is not larger than the preset accuracy threshold value, and is configured to adjust the parameter of the initial traffic sign position recognition model and returning to the traffic sign position recognition model training module.
9. The apparatus of claim 6, wherein the sample location information acquisition subunit comprises:
a feature extraction module configured to, for each positive sample traffic sign selection image, perform feature extraction on the position of the traffic sign in the positive sample traffic sign selection image through a preset position sliding window to obtain initial position information of the traffic sign in the positive sample traffic sign selection image, wherein the position sliding window comprises at least one of the following: a nine-grid sliding window, a six-grid sliding window, and a four-grid sliding window;
a sample position information acquisition module configured to fuse all the initial position information corresponding to all the positive sample traffic sign selection images to obtain the sample position information of the sample image.
10. The apparatus according to any of claims 6 to 9, wherein the feature information comprises at least one of: color feature information, shape feature information, and texture feature information; and
the traffic sign position recognition unit includes:
a dimension reduction subunit configured to perform a dimension reduction operation on the fused feature information to obtain dimension-reduced feature information;
a final position information acquisition subunit configured to fit the dimension-reduced feature information to obtain final position information of the traffic sign in the image to be processed.
11. An electronic device, comprising:
one or more processors;
a memory having one or more programs stored thereon,
the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-5.
12. A computer readable medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 5.
CN202110748454.7A 2019-05-17 2019-05-17 Method and device for identifying traffic sign Active CN113409393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110748454.7A CN113409393B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110748454.7A CN113409393B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign
CN201910412771.4A CN110097600B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910412771.4A Division CN110097600B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign

Publications (2)

Publication Number Publication Date
CN113409393A CN113409393A (en) 2021-09-17
CN113409393B true CN113409393B (en) 2023-10-03

Family

ID=67448476

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110748454.7A Active CN113409393B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign
CN201910412771.4A Active CN110097600B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910412771.4A Active CN110097600B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign

Country Status (1)

Country Link
CN (2) CN113409393B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409393B (en) * 2019-05-17 2023-10-03 百度在线网络技术(北京)有限公司 Method and device for identifying traffic sign
CN111931683B (en) * 2020-08-25 2023-09-05 腾讯科技(深圳)有限公司 Image recognition method, device and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060119674A (en) * 2005-05-18 2006-11-24 엘지전자 주식회사 Method and apparatus for providing prediction information on traffic and using the information
WO2016155371A1 (en) * 2015-03-31 2016-10-06 百度在线网络技术(北京)有限公司 Method and device for recognizing traffic signs
CN108734123A (en) * 2018-05-18 2018-11-02 武昌理工学院 Highway signs recognition methods, electronic equipment, storage medium and system
CN108805018A (en) * 2018-04-27 2018-11-13 淘然视界(杭州)科技有限公司 Road signs detection recognition method, electronic equipment, storage medium and system
CN108985217A (en) * 2018-07-10 2018-12-11 常州大学 A kind of traffic sign recognition method and system based on deep space network
CN109508580A (en) * 2017-09-15 2019-03-22 百度在线网络技术(北京)有限公司 Traffic lights recognition methods and device
CN110097600B (en) * 2019-05-17 2021-08-06 百度在线网络技术(北京)有限公司 Method and device for identifying traffic sign

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8447139B2 (en) * 2010-04-13 2013-05-21 International Business Machines Corporation Object recognition using Haar features and histograms of oriented gradients
US8620032B2 (en) * 2011-05-10 2013-12-31 GM Global Technology Operations LLC System and method for traffic signal detection
US9904852B2 (en) * 2013-05-23 2018-02-27 Sri International Real-time object detection, tracking and occlusion reasoning
CN103366190B (en) * 2013-07-26 2017-03-29 中国科学院自动化研究所 A kind of method of identification traffic signss
CN104616021B (en) * 2014-12-24 2020-05-05 清华大学 Traffic sign image processing method and device
CN106326288B (en) * 2015-06-30 2019-12-03 阿里巴巴集团控股有限公司 Image search method and device
CN105809121A (en) * 2016-03-03 2016-07-27 电子科技大学 Multi-characteristic synergic traffic sign detection and identification method
CN106682664A (en) * 2016-12-07 2017-05-17 华南理工大学 Water meter disc area detection method based on full convolution recurrent neural network
CN109325438B (en) * 2018-09-18 2021-06-15 桂林电子科技大学 Real-time identification method of live panoramic traffic sign

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tang; Liu Bo; Cai Zixing; Xie Bin. Traffic sign recognition based on two-dimensional principal component analysis. Computer Science, 2010, (11): 293-294. *

Also Published As

Publication number Publication date
CN110097600A (en) 2019-08-06
CN113409393A (en) 2021-09-17
CN110097600B (en) 2021-08-06

Similar Documents

Publication Publication Date Title
CN111626208B (en) Method and device for detecting small objects
CN109508580B (en) Traffic signal lamp identification method and device
CN110163153B (en) Method and device for recognizing traffic sign board boundary
CN113642633B (en) Method, device, equipment and medium for classifying driving scene data
CN110119725B (en) Method and device for detecting signal lamp
CN115616937B (en) Automatic driving simulation test method, device, equipment and computer readable medium
CN113409393B (en) Method and device for identifying traffic sign
CN115339453B (en) Vehicle lane change decision information generation method, device, equipment and computer medium
CN110287817B (en) Target recognition and target recognition model training method and device and electronic equipment
CN109903308B (en) Method and device for acquiring information
CN111340880B (en) Method and apparatus for generating predictive model
CN110853364B (en) Data monitoring method and device
CN110135517B (en) Method and device for obtaining vehicle similarity
CN115512336B (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN110633598B (en) Method and device for determining a driving area in an environment image
CN111383337B (en) Method and device for identifying objects
CN115497078B (en) Lane line generation method, apparatus, device, and computer-readable medium
CN111488928B (en) Method and device for acquiring samples
CN116452957B (en) Quality detection method and device for image annotation data and electronic equipment
CN116010289B (en) Automatic driving simulation scene test method and device, electronic equipment and readable medium
CN115049895B (en) Image attribute identification method, attribute identification model training method and device
CN112434591B (en) Lane line determination method and device
CN111523409B (en) Method and device for generating position information
CN116012816A (en) Traffic light identification method and device, electronic equipment and computer readable storage medium
CN116503839A (en) Pavement arrow identification method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant