CN113409393A - Method and device for identifying traffic sign - Google Patents

Method and device for identifying traffic sign

Info

Publication number
CN113409393A
CN113409393A (application CN202110748454.7A)
Authority
CN
China
Prior art keywords
traffic sign
sample
image
position information
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110748454.7A
Other languages
Chinese (zh)
Other versions
CN113409393B (en)
Inventor
段旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Priority to CN202110748454.7A priority Critical patent/CN113409393B/en
Publication of CN113409393A publication Critical patent/CN113409393A/en
Application granted granted Critical
Publication of CN113409393B publication Critical patent/CN113409393B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Abstract

Embodiments of the present disclosure disclose a method and a device for identifying a traffic sign. One embodiment of the method comprises: scaling an image to be processed according to set proportions to obtain at least one scaled image to be processed; for each scaled image, importing it into a pre-trained traffic sign position recognition model to obtain position information of the traffic sign in that scaled image, and extracting feature information at the image position corresponding to the position information, wherein the traffic sign position recognition model recognizes the position information of the traffic sign through at least one position sliding window; and fusing the feature information corresponding to the scaled images to determine the final position information of the traffic sign in the image to be processed. This embodiment improves the accuracy of traffic sign recognition.

Description

Method and device for identifying traffic sign
This application is a divisional application of an earlier application for a method and device for identifying a traffic sign; the filing date of the original application is May 17, 2019, its application number is 201910412771.4, and its title is: a method and apparatus for identifying traffic signs.
Technical Field
The embodiment of the disclosure relates to the technical field of image processing, in particular to a method and a device for identifying a traffic sign board.
Background
With the rapid development of intelligent vehicles and driverless technology, traffic sign detection and recognition has become an important component of safe driving. An intelligent vehicle can acquire an image containing a traffic sign, recognize the sign in the image, and drive itself according to the recognized sign.
Disclosure of Invention
Embodiments of the present disclosure provide methods and apparatus for identifying traffic signs.
In a first aspect, embodiments of the present disclosure provide a method for identifying a traffic sign, the method comprising: scaling an image to be processed according to set proportions to obtain at least one scaled image to be processed; for each scaled image, importing it into a pre-trained traffic sign position recognition model to obtain position information of the traffic sign in that scaled image, and extracting feature information at the image position corresponding to the position information, wherein the traffic sign position recognition model recognizes the position information of the traffic sign through at least one position sliding window; and fusing the feature information corresponding to the scaled images to determine the final position information of the traffic sign in the image to be processed.
In some embodiments, the traffic sign position recognition model is obtained by training the following steps: acquiring a plurality of sample images and sample position information corresponding to a traffic sign corresponding to each sample image in the plurality of sample images; and training to obtain the traffic sign position recognition model by taking each of the plurality of sample images as an input and the sample position information corresponding to each of the plurality of sample images as an output.
In some embodiments, training the traffic sign position recognition model with each sample image as input and its sample position information as output includes performing the following training step: input each of the plurality of sample images in turn into an initial traffic sign position recognition model to obtain predicted position information for each sample image; compare the predicted position information of each sample image with the corresponding sample position information to obtain the prediction accuracy of the initial model; determine whether the prediction accuracy is greater than a preset accuracy threshold; and, if so, take the initial traffic sign position recognition model as the trained traffic sign position recognition model.
In some embodiments, the training further includes: in response to determining that the accuracy is not greater than the preset accuracy threshold, adjusting the parameters of the initial traffic sign position recognition model and continuing to perform the training step.
In some embodiments, the sample position information is obtained by: selecting regions of the sample image through a sliding window to obtain a traffic sign selection image set; calculating a selection accuracy value for each traffic sign selection image in the set, where the selection accuracy value represents the ratio of the intersection to the union between the traffic-sign pixels contained in the selection image and all traffic-sign pixels in the sample image; setting each traffic sign selection image whose selection accuracy value is greater than or equal to a set threshold as a positive sample traffic sign selection image; and fusing all the positive sample traffic sign selection images to obtain the sample position information of the sample image.
In some embodiments, the fusing all the positive sample traffic sign selection images to obtain the sample position information of the sample image includes: for the positive sample traffic sign selection image in all the positive sample traffic sign selection images, performing feature extraction on the position of the traffic sign in the positive sample traffic sign selection image through a preset position sliding window to obtain initial position information of the traffic sign in the positive sample traffic sign selection image, wherein the position sliding window comprises at least one of the following items: a nine-grid sliding window, a six-grid sliding window and a four-grid sliding window; and fusing all initial position information corresponding to all the positive sample traffic sign selection images to obtain sample position information of the sample image.
In some embodiments, the feature information includes at least one of: color feature information, shape feature information, and texture feature information. Fusing the feature information corresponding to the scaled images to determine the final position information of the traffic sign in the image to be processed includes: performing a dimension reduction operation on the fused feature information to obtain reduced feature information; and fitting the reduced feature information to obtain the final position information of the traffic sign in the image to be processed.
In a second aspect, embodiments of the present disclosure provide an apparatus for identifying a traffic sign, the apparatus comprising: a scaled image acquisition unit configured to scale an image to be processed according to set proportions to obtain at least one scaled image to be processed; a feature information acquisition unit configured to, for each scaled image, import the scaled image into a pre-trained traffic sign position recognition model to obtain position information of the traffic sign in that scaled image, and extract feature information at the image position corresponding to the position information, wherein the traffic sign position recognition model recognizes the position information of the traffic sign through at least one position sliding window; and a traffic sign position identification unit configured to fuse the feature information corresponding to the scaled images and determine the final position information of the traffic sign in the image to be processed.
In some embodiments, the apparatus includes a traffic sign position recognition model training unit configured to train a traffic sign position recognition model, the traffic sign position recognition model training unit including: a sample information acquiring subunit configured to acquire a plurality of sample images and sample position information corresponding to a traffic sign corresponding to each of the plurality of sample images; and a traffic sign position recognition model training subunit configured to train to obtain the traffic sign position recognition model by taking each of the plurality of sample images as an input and taking the sample position information corresponding to each of the plurality of sample images as an output.
In some embodiments, the traffic sign position recognition model training subunit includes: the traffic sign position recognition model training module is configured to sequentially input each sample image of the plurality of sample images into an initial traffic sign position recognition model, obtain predicted position information corresponding to each sample image of the plurality of sample images, compare the predicted position information corresponding to each sample image of the plurality of sample images with the sample position information corresponding to the sample image to obtain a predicted accuracy of the initial traffic sign position recognition model, determine whether the predicted accuracy is greater than a preset accuracy threshold, and if the predicted accuracy is greater than the preset accuracy threshold, take the initial traffic sign position recognition model as a trained traffic sign position recognition model.
In some embodiments, the traffic sign position recognition model training subunit includes: and the parameter adjusting module is used for responding to the condition that the accuracy rate is not greater than the preset accuracy rate threshold value, adjusting the parameters of the initial traffic sign position recognition model and returning to the traffic sign position recognition model training module.
In some embodiments, the apparatus further includes a sample position information acquiring unit configured to acquire sample position information, the sample position information acquiring unit including: the traffic sign board selection image set acquisition subunit is configured to perform image selection on the sample image through a sliding window to obtain a traffic sign board selection image set; a selection accuracy value calculation subunit configured to calculate a selection accuracy value of a traffic sign selection image in the traffic sign selection image set, where the selection accuracy value is used to represent a ratio of an intersection to a union between pixels belonging to a traffic sign in the sample image and all pixels of the traffic sign in the sample image in the traffic sign selection image; a positive sample selection subunit configured to set the traffic sign selection image having the selection accuracy value equal to or greater than a set threshold value as a positive sample traffic sign selection image; and the sample position information acquisition subunit is configured to fuse all the positive sample traffic sign selection images to obtain the sample position information of the sample image.
In some embodiments, the sample position information acquiring subunit includes: a feature extraction module, configured to perform feature extraction on the position of the traffic sign in the positive sample traffic sign selection image through a preset position sliding window, to obtain initial position information of the traffic sign in the positive sample traffic sign selection image, where the position sliding window includes at least one of: a nine-grid sliding window, a six-grid sliding window and a four-grid sliding window; and the sample position information acquisition module is configured to fuse all initial position information corresponding to all the positive sample traffic sign selection images to obtain sample position information of the sample images.
In some embodiments, the characteristic information includes at least one of: color feature information, shape feature information, texture feature information. And, the above-mentioned traffic sign position identification unit includes: the dimensionality reduction subunit is configured to perform dimensionality reduction operation on the fused feature information to obtain dimensionality reduction feature information; and the final position information acquisition subunit is configured to fit the dimensionality reduction characteristic information to obtain final position information of the traffic sign in the image to be processed.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a memory having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to perform the method for identifying a traffic sign of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium storing a computer program which, when executed by a processor, implements the method for identifying a traffic sign of the first aspect.
The method and device for identifying a traffic sign provided by the embodiments of the present disclosure first scale an image to be processed according to set proportions to obtain at least one scaled image to be processed; then import each scaled image into a pre-trained traffic sign position recognition model to obtain position information of the traffic sign in the scaled image, and extract feature information at the image position corresponding to the position information; and finally fuse the feature information corresponding to the scaled images to determine the final position information of the traffic sign in the image to be processed. The application improves the accuracy of identifying traffic signs.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a method for identifying traffic signs according to the present disclosure;
FIG. 3 is a schematic illustration of one application scenario of a method for identifying traffic signs according to the present disclosure;
FIG. 4 is a flow diagram of one embodiment of a traffic sign position recognition model training method according to the present disclosure;
FIG. 5 is a schematic structural diagram of one embodiment of an apparatus for identifying traffic signs according to the present disclosure;
FIG. 6 is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 of a method for identifying a traffic sign or an apparatus for identifying a traffic sign to which embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include vehicles 101, 102, 103, a network 104, and a server 105. The network 104 is used to provide a medium for communication links between the vehicles 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The vehicles 101, 102, 103 interact with a server 105 over a network 104 to receive or send messages and the like. The vehicles 101, 102, 103 may have installed thereon various data processing applications, such as image capture applications, traffic light identification applications, data transmission applications, alert applications, and the like.
The vehicles 101, 102, 103 may be various vehicles having a plurality of data acquisition units and data processing units, including but not limited to unmanned vehicles, manned vehicles, electric vehicles, hybrid gasoline-electric vehicles, and internal combustion engine vehicles, among others.
The server 105 may be a server that provides various services, such as a server that processes images to be processed containing traffic signs sent from the vehicles 101, 102, 103. The server may analyze the received data such as the images to be processed, and feed back the processing result (e.g., position information of the traffic sign) to the vehicles 101, 102, 103.
It should be noted that the method for identifying the traffic sign provided by the embodiment of the present disclosure may be performed by the vehicles 101, 102, 103 individually, or may also be performed by the vehicles 101, 102, 103 and the server 105 together. Accordingly, the means for identifying the traffic sign may be provided in the vehicles 101, 102, 103, or in the server 105.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (for example, to provide distributed services), or may be implemented as a single software or software module, and is not limited specifically herein.
It should be understood that the number of vehicles, networks, and servers in FIG. 1 is merely illustrative. There may be any number of vehicles, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for identifying traffic signs according to the present disclosure is shown. The method for recognizing a traffic sign includes the steps of:
step 201, zooming the image to be processed according to a set proportion to obtain at least one zoomed image to be processed.
In the present embodiment, the execution subject of the method for recognizing a traffic sign (e.g., the vehicles 101, 102, 103 and/or the server 105 shown in fig. 1) may acquire the image to be processed through a wired or wireless connection. The image to be processed may be a road image containing a traffic sign, obtained by cameras on the vehicles 101, 102, 103 or received from other terminal devices (e.g., traffic monitoring cameras). It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a ZigBee connection, a UWB (Ultra-Wideband) connection, and other wireless connections now known or developed in the future.
In practice, traffic signs come in various shapes. Prior-art recognition is easily affected by environmental factors: it generally works well on traffic signs of a specified shape but has low accuracy on signs of other shapes.
To improve the accuracy of identifying the traffic sign, after acquiring the image to be processed, the execution subject may scale it according to set proportions to obtain at least one scaled image to be processed.
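As a concrete illustration of this scaling step, the pixel dimensions of each scaled copy can be computed from the set proportions. The specific ratios below are hypothetical; the disclosure leaves the set proportion open.

```python
def scale_sizes(width, height, ratios):
    """Compute the pixel dimensions of each scaled copy of the image
    to be processed, one per configured scaling ratio."""
    return [(max(1, round(width * r)), max(1, round(height * r)))
            for r in ratios]

# Hypothetical set proportions: half size, original size, double size.
RATIOS = (0.5, 1.0, 2.0)
print(scale_sizes(1280, 720, RATIOS))  # three (width, height) pairs
```

Each resulting size corresponds to one scaled image to be processed; running the recognition model at several scales lets signs of different apparent sizes fall within the model's detection range.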
Step 202, for each scaled image to be processed in the at least one scaled image, importing the scaled image into a pre-trained traffic sign position recognition model to obtain position information of the traffic sign in the scaled image, and extracting feature information at the image position corresponding to the position information.
The execution subject may import each scaled image to be processed into the pre-trained traffic sign position recognition model to obtain the position information of the traffic sign in that scaled image. The position information may be represented by the coordinates of the traffic sign in the scaled image, and may be used to mark the structure of the traffic sign. The traffic sign position recognition model recognizes this position information through at least one position sliding window. Thereafter, the execution subject may extract feature information at the image position corresponding to the position information; the feature information likewise characterizes the structural features of the traffic sign.
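A minimal sketch of "extract feature information at the image position": crop the region given by the predicted position information and compute a feature over it. The mean grayscale value used here is a toy stand-in; the disclosure's feature information would include richer color, shape, and texture features.

```python
def extract_region_feature(image, box):
    """Crop the region described by the position information
    (x1, y1, x2, y2) from a row-major grid of grayscale values and
    return a toy feature: the mean pixel value of the crop."""
    x1, y1, x2, y2 = box
    pixels = [image[y][x] for y in range(y1, y2) for x in range(x1, x2)]
    return sum(pixels) / len(pixels)

# A 4x4 image whose top-left 2x2 block is bright (value 200).
img = [[200, 200, 0, 0],
       [200, 200, 0, 0],
       [0,   0,   0, 0],
       [0,   0,   0, 0]]
print(extract_region_feature(img, (0, 0, 2, 2)))  # 200.0
```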
In some optional implementations of this embodiment, the traffic sign position recognition model is obtained by training through the following steps:
the method comprises the steps of firstly, obtaining a plurality of sample images and sample position information of a traffic sign corresponding to each sample image in the plurality of sample images.
When training the traffic sign position recognition model, the executive body may first obtain the sample image and the sample position information corresponding to the sample image. Wherein the sample image comprises a traffic sign image; the sample position information may be used to mark the shape of the traffic sign.
And secondly, training to obtain the traffic sign position recognition model by taking each sample image of the plurality of sample images as input and the sample position information corresponding to each sample image of the plurality of sample images as output.
The execution subject may train the traffic sign position recognition model through various networks (e.g., may be a convolutional neural network, a deep learning network, etc.). The execution subject can take the sample image as network input, take the sample position information corresponding to the sample image as network output, and train to obtain the traffic sign position recognition model.
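The iterative training this disclosure describes (predict on every sample, compare with the sample position information, stop once a preset accuracy threshold is exceeded, otherwise adjust the parameters) can be sketched generically. The model, prediction function, accuracy metric, and update rule below are all placeholders, not the disclosure's actual network.

```python
def train_until_accurate(model, samples, labels, predict, accuracy,
                         adjust, threshold, max_rounds=100):
    """Predict on every sample, compare predictions with the sample
    position information, stop once accuracy exceeds the threshold,
    otherwise adjust the model parameters and repeat."""
    for _ in range(max_rounds):
        preds = [predict(model, s) for s in samples]
        if accuracy(preds, labels) > threshold:
            break
        model = adjust(model, preds, labels)
    return model
```

With a toy "model" that is just an additive bias and an update rule that decrements it, the loop converges to the bias that reproduces the labels exactly.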
In some optional implementation manners of this embodiment, the sample position information may be obtained by:
firstly, image selection is carried out on the sample images through a sliding window, and a traffic sign plate selection image set is obtained.
The execution subject can select images on the sample images through the sliding window to obtain a traffic sign selection image set corresponding to the sample images.
Second, a selection accuracy value is calculated for each traffic sign selection image in the traffic sign selection image set.
A traffic sign selection image may contain all or only part of the traffic sign. The selection accuracy value characterizes how well the sign content of the selection image matches the actual traffic sign in the sample image: it is the ratio of the intersection to the union between the traffic-sign pixels contained in the selection image and all traffic-sign pixels of the sample image. Alternatively, it can be calculated as the percentage of pixels in the selection image that belong to the traffic sign, or by other measures determined by actual needs.
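The selection accuracy value described above is an intersection-over-union (IoU) measure. For the common case where the selection window and the sign annotation are axis-aligned boxes, it can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2): the ratio of the overlap area to the union area,
    i.e. the 'selection accuracy value' of the text."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A selection window is then kept as a positive sample when `iou(window, annotation)` meets the set threshold.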
And thirdly, setting the traffic sign selection image with the selection accuracy value larger than or equal to the set threshold value as a positive sample traffic sign selection image.
Thereafter, the executing body may screen the traffic sign selection image by the selection accuracy value. The execution subject may set the traffic sign selection image whose selection accuracy value is equal to or greater than the set threshold value as a positive sample traffic sign selection image.
And fourthly, fusing all the positive sample traffic sign board selection images to obtain sample position information of the sample images.
A positive sample traffic sign selection image covers most of the traffic sign. The execution subject may fuse the positive sample selection images according to the traffic sign information they contain, obtaining the sample position information of the sample image.
In some optional implementation manners of this embodiment, the fusing all the positive sample traffic sign selection images to obtain the sample position information of the sample image may include the following steps:
firstly, for the positive sample traffic sign selection image in all the positive sample traffic sign selection images, performing feature extraction on the position of the traffic sign in the positive sample traffic sign selection image through a preset position sliding window to obtain initial position information of the traffic sign in the positive sample traffic sign selection image.
As noted above, real traffic signs have different shapes. To obtain the position of the traffic sign as accurately as possible, the execution subject may perform feature extraction on the position of the traffic sign in each positive sample selection image through a preset position sliding window, obtaining the initial position information of the traffic sign in that image. The position sliding window may include at least one of: a nine-grid sliding window, a six-grid sliding window (e.g., two rows by three columns, or three rows by two columns), and a four-grid sliding window. Other window types (e.g., a regular-hexagon or triangular sliding window) are also possible and are not detailed here. The initial position information may be represented by the coordinates, within the positive sample selection image, of the traffic sign of the sample image.
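One way to read the grid-style position sliding windows is as a subdivision of a window into cells: a nine-grid is 3x3, the two six-grid variants are 2x3 and 3x2, and a four-grid is 2x2. This subdivision sketch is our illustration of that reading, not the disclosure's exact construction.

```python
def grid_cells(box, rows, cols):
    """Split a window (x1, y1, x2, y2) into rows x cols equal sub-cells,
    returned row by row, each as its own (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    cw, ch = (x2 - x1) / cols, (y2 - y1) / rows
    return [(x1 + c * cw, y1 + r * ch,
             x1 + (c + 1) * cw, y1 + (r + 1) * ch)
            for r in range(rows) for c in range(cols)]

# A nine-grid over a 3x3 window yields nine unit cells.
print(len(grid_cells((0, 0, 3, 3), 3, 3)))  # 9
```

Features extracted per cell give the model a coarse notion of where, within the window, the parts of the sign lie.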
And secondly, fusing all initial position information corresponding to all the positive sample traffic sign selection images to obtain sample position information of the sample images.
After obtaining the initial position information, the execution subject may fuse the initial position information in a plurality of ways (for example, fitting a coordinate point of a traffic sign corresponding to the initial position information, etc.), so as to obtain sample position information of the sample image.
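One simple instance of fusing the initial position information from all positive-sample selection images is coordinate-wise averaging of the boxes. Averaging is our assumption for illustration; the disclosure only says the information may be fused in a plurality of ways, such as fitting the coordinate points.

```python
def fuse_boxes(boxes):
    """Fuse a list of (x1, y1, x2, y2) boxes into one box by averaging
    each coordinate across all positive-sample selection images."""
    n = len(boxes)
    return tuple(sum(box[i] for box in boxes) / n for i in range(4))

print(fuse_boxes([(0, 0, 2, 2), (2, 2, 4, 4)]))  # (1.0, 1.0, 3.0, 3.0)
```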
Step 203, fusing at least one piece of feature information corresponding to the at least one zoomed image to be processed, and determining the final position information of the traffic sign in the image to be processed.
After the feature information is obtained, the execution subject may fuse the pieces of feature information to obtain the final position information of the traffic sign. Because each piece of feature information is obtained on the basis of position information, the fused feature information can characterize the accurate position of the traffic sign. Once the final position information of the traffic sign is obtained, the traffic sign can be accurately identified.
In some optional implementations of this embodiment, fusing the at least one piece of feature information corresponding to the at least one zoomed image to be processed to determine the final position information of the traffic sign in the image to be processed may include the following steps:
First, a dimension reduction operation is performed on the fused feature information to obtain dimension-reduced feature information.
The feature information may include at least one of the following: color feature information, shape feature information, texture feature information, and the like. The amount of information contained in the fused feature information is therefore large. In order to process the data quickly, the execution subject may perform a dimension reduction operation on the fused feature information to obtain dimension-reduced feature information. The dimension reduction operation is used to extract representative feature information from the fused feature information. Specifically, the dimension reduction operation may include: deleting feature information whose occurrence count is less than a set number; and selecting one piece of feature information from a large amount of identical or similar feature information while deleting the remaining identical or similar pieces. The dimension reduction operation may also be carried out in other manners, which will not be described in detail here.
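One possible reading of this dimension reduction (dropping rarely occurring feature vectors and keeping a single representative of each group of near-identical ones) could be sketched as follows; the cosine-similarity grouping and both thresholds are assumptions made for illustration only.

```python
import numpy as np

def reduce_features(features, min_count=1, sim_thresh=0.95):
    """Greedy dimension reduction: group near-identical feature vectors
    (cosine similarity >= sim_thresh), keep one representative per group,
    and drop groups that occur fewer than min_count times."""
    feats = np.asarray(features, dtype=float)
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    unit = feats / np.clip(norms, 1e-12, None)
    reps, counts = [], []          # representative indices and group sizes
    for i, u in enumerate(unit):
        for j, r in enumerate(reps):
            if float(u @ unit[r]) >= sim_thresh:
                counts[j] += 1     # near-duplicate: merge into this group
                break
        else:
            reps.append(i)         # new representative feature
            counts.append(1)
    keep = [r for r, c in zip(reps, counts) if c >= min_count]
    return feats[keep]
```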
Second, the dimension-reduced feature information is fitted to obtain the final position information of the traffic sign in the image to be processed.
The execution subject may fit the dimension-reduced feature information to obtain the final position information of the traffic sign in the image to be processed. In this way, the data can be processed quickly, accurate position information of the traffic sign can be obtained, and the efficiency of recognizing the traffic sign is improved.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for recognizing a traffic sign according to the present embodiment. In the application scenario of fig. 3, the vehicle may first zoom the image to be processed in fig. 3 according to a set scale to obtain at least one zoomed image to be processed; then, the vehicle may import each zoomed image to be processed into a pre-trained traffic sign position recognition model to obtain the position information of the traffic sign in the corresponding zoomed image, and extract feature information at the image position corresponding to the position information; finally, the vehicle fuses all the obtained feature information to obtain the final position information of the traffic sign (as shown by the dotted-line frame in fig. 3).
The method provided by the embodiment of the present disclosure first zooms the image to be processed according to a set scale to obtain at least one zoomed image to be processed; then imports each zoomed image to be processed into a pre-trained traffic sign position recognition model to obtain the position information of the traffic sign in the zoomed image, and extracts feature information at the image position corresponding to the position information; and finally fuses the at least one piece of feature information corresponding to the at least one zoomed image to be processed to determine the final position information of the traffic sign in the image to be processed. In this way, the present application improves the accuracy of identifying traffic signs.
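The overall flow of the embodiment (zoom, per-scale position recognition, feature extraction, fusion) might be sketched as follows; the nearest-neighbour resize and the three injected callables (`position_model`, `extract_features`, `fuse`) are hypothetical stand-ins for the model and fusion logic described above.

```python
import numpy as np

def resize_nn(image, scale):
    """Minimal nearest-neighbour rescale (a stand-in for a real resize)."""
    h, w = image.shape[:2]
    ys = (np.arange(int(h * scale)) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(int(w * scale)) / scale).astype(int).clip(0, w - 1)
    return image[ys][:, xs]

def recognize_sign(image, scales, position_model, extract_features, fuse):
    """Run the pre-trained position model on each zoomed copy of the
    image, extract features at each predicted position, and fuse all
    features into the final position of the traffic sign."""
    feats = []
    for s in scales:
        scaled = resize_nn(image, s)
        box = position_model(scaled)         # position info at this scale
        feats.append(extract_features(scaled, box, s))
    return fuse(feats)                       # final position in the original image
```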
With further reference to FIG. 4, a flow 400 of one embodiment of a traffic sign position recognition model training method is shown. The flow 400 of the traffic sign position recognition model training method comprises the following steps:
step 401, obtaining a plurality of sample images and sample position information of a traffic sign corresponding to each sample image in the plurality of sample images.
In this embodiment, the execution subject of the traffic sign position recognition model training method (for example, the server 105 shown in fig. 1) may acquire a plurality of sample images and the sample position information corresponding to each of the plurality of sample images.
Step 402, sequentially inputting each sample image of the plurality of sample images to an initial traffic sign position recognition model, and obtaining predicted position information corresponding to each sample image of the plurality of sample images.
In this embodiment, based on the plurality of sample images acquired in step 401, the execution subject may sequentially input each of the plurality of sample images into the initial traffic sign position recognition model, thereby obtaining the predicted position information corresponding to each sample image. Specifically, the execution subject may input each sample image at the input side of the initial traffic sign position recognition model, let it pass successively through the parameters of each layer of the model, and obtain the output at the output side, where the information output at the output side is the predicted position information corresponding to that sample image. The initial traffic sign position recognition model may be an untrained model or an incompletely trained model (for example, a deep learning model), each layer of which is provided with initialization parameters that are continuously adjusted during training.
Step 403, comparing the predicted position information corresponding to each sample image in the plurality of sample images with the sample position information corresponding to the sample image to obtain the prediction accuracy of the initial traffic sign position recognition model.
Based on the predicted position information corresponding to each of the plurality of sample images obtained in step 402, the execution subject may compare the predicted position information corresponding to each sample image with the sample position information corresponding to that sample image, so as to obtain the prediction accuracy of the initial traffic sign position recognition model. Specifically, if the predicted position information corresponding to a sample image is the same as or similar to the sample position information corresponding to that sample image, the initial traffic sign position recognition model has predicted correctly; if the predicted position information corresponding to a sample image is different from and not similar to the sample position information corresponding to that sample image, the prediction of the initial traffic sign position recognition model is wrong. Here, the execution subject may calculate the ratio of the number of correct predictions to the total number of samples and take this ratio as the prediction accuracy of the initial traffic sign position recognition model.
Step 404, determining whether the prediction accuracy is greater than a preset accuracy threshold.
Based on the prediction accuracy of the initial traffic sign position recognition model obtained in step 403, the execution subject may compare the prediction accuracy with a preset accuracy threshold. If the prediction accuracy is greater than the preset accuracy threshold, proceed to step 405; otherwise, proceed to step 406.
Step 405, taking the initial traffic sign position recognition model as the trained traffic sign position recognition model.
In this embodiment, when the prediction accuracy of the initial traffic sign position recognition model is greater than the preset accuracy threshold, it indicates that the training of the initial traffic sign position recognition model is completed, and at this time, the execution subject may use the initial traffic sign position recognition model as the trained traffic sign position recognition model.
Step 406, adjusting the parameters of the initial traffic sign position recognition model.
In this embodiment, when the prediction accuracy of the initial traffic sign position recognition model is not greater than the preset accuracy threshold, the execution subject may adjust the parameters of the initial traffic sign position recognition model and return to step 402, until a traffic sign position recognition model capable of obtaining accurate position information has been trained.
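Steps 401 to 406 describe a classic evaluate-and-adjust training loop. A minimal sketch under stated assumptions (an IoU test for the "same or similar" comparison, and a hypothetical model object exposing `predict` and `adjust` methods) might look like:

```python
def boxes_similar(a, b, iou_thresh=0.5):
    """'Same or similar' check: box intersection-over-union (an assumed
    similarity measure; the patent does not fix one)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return union > 0 and inter / union >= iou_thresh

def train_position_model(model, samples, acc_threshold=0.9, max_rounds=100):
    """Steps 402-406: predict on every sample, measure accuracy against
    the sample position information, and keep adjusting the parameters
    until the accuracy exceeds the preset threshold."""
    for _ in range(max_rounds):
        errors, correct = [], 0
        for image, true_box in samples:
            pred = model.predict(image)             # step 402
            if boxes_similar(pred, true_box):       # step 403
                correct += 1
            else:
                errors.append((image, true_box, pred))
        if correct / len(samples) > acc_threshold:  # steps 404-405
            return model
        model.adjust(errors)                        # step 406
    return model
```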
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for identifying traffic signs, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for recognizing a traffic sign of the present embodiment may include: a zoomed image-to-be-processed acquisition unit 501, a feature information acquisition unit 502, and a traffic sign position recognition unit 503. The zoomed image-to-be-processed acquisition unit 501 is configured to zoom an image to be processed according to a set scale to obtain at least one zoomed image to be processed; the feature information acquisition unit 502 is configured to, for a zoomed image to be processed among the at least one zoomed image to be processed, import the zoomed image to be processed into a pre-trained traffic sign position recognition model, obtain position information corresponding to the traffic sign in the zoomed image to be processed, and extract feature information at the image position corresponding to the position information, where the traffic sign position recognition model is used to recognize the position information of the traffic sign in the zoomed image to be processed through at least one position sliding window; and the traffic sign position recognition unit 503 is configured to fuse the at least one piece of feature information corresponding to the at least one zoomed image to be processed and determine the final position information of the traffic sign in the image to be processed.
In some optional implementations of this embodiment, the apparatus 500 for recognizing a traffic sign includes a traffic sign position recognition model training unit (not shown in the figure) configured to train the traffic sign position recognition model, and the traffic sign position recognition model training unit includes: a sample information acquisition subunit (not shown in the figure) and a traffic sign position recognition model training subunit (not shown in the figure). The sample information acquisition subunit is configured to acquire a plurality of sample images and the sample position information of the traffic sign corresponding to each of the plurality of sample images; and the traffic sign position recognition model training subunit is configured to train the traffic sign position recognition model by taking each of the plurality of sample images as an input and the sample position information corresponding to each of the plurality of sample images as an output.
In some optional implementations of this embodiment, the traffic sign position recognition model training subunit includes: a traffic sign position recognition model training module (not shown in the figures) configured to sequentially input each of the plurality of sample images into an initial traffic sign position recognition model to obtain the predicted position information corresponding to each of the plurality of sample images, compare the predicted position information corresponding to each sample image with the sample position information corresponding to that sample image to obtain the prediction accuracy of the initial traffic sign position recognition model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and, if so, take the initial traffic sign position recognition model as the trained traffic sign position recognition model.
In some optional implementations of this embodiment, the traffic sign position recognition model training subunit may further include: a parameter adjustment module (not shown in the figure) configured to adjust the parameters of the initial traffic sign position recognition model in response to the prediction accuracy not being greater than the preset accuracy threshold, and to return to the traffic sign position recognition model training module.
In some optional implementations of this embodiment, the apparatus 500 for recognizing a traffic sign may further include a sample position information acquisition unit (not shown in the figure) configured to acquire sample position information, and the sample position information acquisition unit may include: a traffic sign selection image set acquisition subunit (not shown in the figure), a selection accuracy value calculation subunit (not shown in the figure), a positive sample selection subunit (not shown in the figure), and a sample position information acquisition subunit (not shown in the figure). The traffic sign selection image set acquisition subunit is configured to perform image selection on the sample image through a sliding window to obtain a traffic sign selection image set; the selection accuracy value calculation subunit is configured to calculate a selection accuracy value of a traffic sign selection image in the traffic sign selection image set, where the selection accuracy value characterizes the ratio of the intersection to the union between the pixels of the traffic sign of the sample image that fall within the traffic sign selection image and all pixels of the traffic sign in the sample image; the positive sample selection subunit is configured to take a traffic sign selection image whose selection accuracy value is equal to or greater than a set threshold as a positive sample traffic sign selection image; and the sample position information acquisition subunit is configured to fuse all positive sample traffic sign selection images to obtain the sample position information of the sample image.
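The selection accuracy value computed by the selection accuracy value calculation subunit reads as a pixel-level intersection-over-union. One plausible sketch (treating the sliding window as a mask and comparing it against the sign's pixel mask — an interpretive assumption, since the original wording is ambiguous) is:

```python
import numpy as np

def selection_accuracy(sign_mask, window):
    """Selection accuracy value of one sliding-window selection image:
    intersection-over-union between the window's pixels and the sign's
    pixels in the sample image."""
    x0, y0, x1, y1 = window
    win_mask = np.zeros_like(sign_mask, dtype=bool)
    win_mask[y0:y1, x0:x1] = True
    sign = sign_mask.astype(bool)
    inter = (sign & win_mask).sum()   # sign pixels inside the window
    union = (sign | win_mask).sum()   # sign pixels or window pixels
    return inter / union if union else 0.0
```

A window that tightly encloses the sign scores 1.0; windows that clip the sign or include much background score lower, which is what the positive/negative thresholding exploits.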
In some optional implementation manners of this embodiment, the sample position information obtaining subunit may include: a feature extraction module (not shown in the figure) and a sample position information acquisition module (not shown in the figure). The feature extraction module is configured to perform feature extraction on the position of the traffic sign in the positive sample traffic sign selection image through a preset position sliding window to obtain initial position information of the traffic sign in the positive sample traffic sign selection image, where the position sliding window includes at least one of the following: a nine-grid sliding window, a six-grid sliding window and a four-grid sliding window; and the sample position information acquisition module is configured to fuse all initial position information corresponding to all the positive sample traffic sign selection images to obtain sample position information of the sample images.
In some optional implementations of this embodiment, the feature information includes at least one of the following: color feature information, shape feature information, and texture feature information; and the traffic sign position recognition unit 503 may include: a dimension reduction subunit (not shown in the figure) and a final position information acquisition subunit (not shown in the figure). The dimension reduction subunit is configured to perform a dimension reduction operation on the fused feature information to obtain dimension-reduced feature information; and the final position information acquisition subunit is configured to fit the dimension-reduced feature information to obtain the final position information of the traffic sign in the image to be processed.
The present embodiment also provides an electronic device, including: one or more processors; a memory having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to perform the above-described method for identifying a traffic sign.
The present embodiment also provides a computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the above-mentioned method for identifying a traffic sign.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use with an electronic device (e.g., server 105 of FIG. 1) to implement an embodiment of the present disclosure. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium mentioned above in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: zooming the image to be processed according to a set proportion to obtain at least one zoomed image to be processed; for a zoomed image to be processed in the at least one zoomed image to be processed, importing the zoomed image to be processed into a pre-trained traffic sign position recognition model to obtain position information corresponding to the traffic sign in the zoomed image to be processed, and extracting feature information from an image position corresponding to the position information, wherein the traffic sign position recognition model is used for recognizing the position information of the traffic sign of the zoomed image to be processed through at least one position sliding window; and fusing at least one piece of characteristic information corresponding to the at least one zoomed image to be processed, and determining the final position information of the traffic sign in the image to be processed.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a scaling to-be-processed image acquisition unit, a feature information acquisition unit, and a traffic sign position recognition unit. Here, the names of the cells do not constitute a limitation to the cells themselves in some cases, and for example, the traffic sign position recognition unit may also be described as a "cell for obtaining position information of a traffic sign by characteristic information".
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept defined above, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.

Claims (12)

1. A method for identifying a traffic sign, comprising:
zooming the image to be processed according to a set proportion to obtain at least one zoomed image to be processed;
for a zoomed image to be processed in the at least one zoomed image to be processed, importing the zoomed image to be processed into a pre-trained traffic sign position recognition model to obtain position information corresponding to the traffic sign in the zoomed image to be processed, and extracting feature information from an image position corresponding to the position information, wherein the traffic sign position recognition model is used for recognizing the position information of the traffic sign in the zoomed image to be processed through at least one position sliding window;
fusing at least one piece of characteristic information corresponding to the at least one zoomed image to be processed, and determining the final position information of the traffic sign board in the image to be processed;
the traffic sign position recognition model is obtained by training through the following steps:
acquiring a plurality of sample images and sample position information of the traffic sign corresponding to each sample image in the plurality of sample images;
taking each sample image of the plurality of sample images as input, taking the sample position information corresponding to each sample image of the plurality of sample images as output, and training to obtain the traffic sign position recognition model;
wherein the sample position information is obtained by:
selecting images of the sample images through a sliding window to obtain a traffic sign board selection image set;
calculating a selection accuracy value of the traffic sign selection image in the traffic sign selection image set, wherein the selection accuracy value is used for characterizing the ratio of the intersection to the union between the pixels of the traffic sign of the sample image that fall within the traffic sign selection image and all pixels of the traffic sign in the sample image;
setting the traffic sign selection image with the selection accuracy value larger than or equal to the set threshold value as a positive sample traffic sign selection image;
and fusing all the selected images of the positive sample traffic sign boards to obtain sample position information of the sample images.
2. The method of claim 1, wherein the training of the traffic sign position recognition model using each of the plurality of sample images as an input and the sample position information corresponding to each of the plurality of sample images as an output comprises:
the following training steps are performed: sequentially inputting each sample image in the plurality of sample images to an initial traffic sign position recognition model to obtain predicted position information corresponding to each sample image in the plurality of sample images, comparing the predicted position information corresponding to each sample image in the plurality of sample images with the sample position information corresponding to the sample image to obtain the predicted accuracy of the initial traffic sign position recognition model, determining whether the predicted accuracy is greater than a preset accuracy threshold, and if so, taking the initial traffic sign position recognition model as a trained traffic sign position recognition model.
3. The method of claim 2, wherein the training of the traffic sign position recognition model using each of the plurality of sample images as an input and the sample position information corresponding to each of the plurality of sample images as an output comprises:
and adjusting the parameters of the initial traffic sign position recognition model in response to the accuracy not being greater than the preset accuracy threshold, and continuing to execute the training step.
4. The method of claim 1, wherein the fusing of all positive sample traffic sign selection images to obtain sample location information for the sample images comprises:
for the positive sample traffic sign selection image in all the positive sample traffic sign selection images, performing feature extraction on the position of the traffic sign in the positive sample traffic sign selection image through a preset position sliding window to obtain initial position information of the traffic sign in the positive sample traffic sign selection image, wherein the position sliding window comprises at least one of the following items: a nine-grid sliding window, a six-grid sliding window and a four-grid sliding window;
and fusing all initial position information corresponding to all the positive sample traffic sign board selection images to obtain sample position information of the sample image.
6. The method of any of claims 1 to 4, wherein the feature information comprises at least one of: color feature information, shape feature information, and texture feature information; and
the fusing at least one feature information corresponding to the at least one zoomed image to be processed to determine the final position information of the traffic sign in the image to be processed includes:
performing dimension reduction operation on the fused feature information to obtain dimension reduction feature information;
and fitting the dimensionality reduction characteristic information to obtain the final position information of the traffic sign board in the image to be processed.
6. An apparatus for identifying a traffic sign, comprising:
a zoomed image acquisition unit configured to zoom the image to be processed according to a set proportion to obtain at least one zoomed image to be processed;
a feature information acquisition unit configured to, for each zoomed image to be processed in the at least one zoomed image to be processed, import the zoomed image to be processed into a pre-trained traffic sign position recognition model to obtain position information corresponding to the traffic sign in the zoomed image to be processed, and extract feature information from the image position corresponding to the position information, wherein the traffic sign position recognition model is used for recognizing the position information of the traffic sign in the zoomed image to be processed through at least one position sliding window;
a traffic sign position recognition unit configured to fuse the at least one piece of feature information corresponding to the at least one zoomed image to be processed and determine final position information of the traffic sign in the image to be processed;
a traffic sign position recognition model training unit configured to train the traffic sign position recognition model, the traffic sign position recognition model training unit including:
a sample information acquisition subunit configured to acquire a plurality of sample images and sample position information of the traffic sign corresponding to each of the plurality of sample images; and
a traffic sign position recognition model training subunit configured to train to obtain the traffic sign position recognition model by taking each of the plurality of sample images as an input and taking the sample position information corresponding to each of the plurality of sample images as an output; and
a sample position information acquisition unit configured to acquire the sample position information, the sample position information acquisition unit including:
a traffic sign selection image set acquisition subunit configured to perform image selection on the sample image through a sliding window to obtain a traffic sign selection image set;
a selection accuracy value calculation subunit configured to calculate a selection accuracy value of each traffic sign selection image in the traffic sign selection image set, wherein the selection accuracy value is used to characterize the ratio of the intersection to the union between the pixels belonging to the traffic sign in the sample image that fall within the traffic sign selection image and all the pixels of the traffic sign in the sample image;
a positive sample selection subunit configured to set a traffic sign selection image whose selection accuracy value is equal to or greater than a set threshold as a positive sample traffic sign selection image; and
a sample position information acquisition subunit configured to fuse all the positive sample traffic sign selection images to obtain the sample position information of the sample image.
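The "selection accuracy value" of claim 6 is a pixel-level intersection-over-union between the sign pixels captured by a sliding-window selection and all sign pixels of the sample image. A minimal sketch, assuming boolean full-image masks for the sign and the window (the mask representation is an assumption, not stated in the claim):

```python
import numpy as np

# Hypothetical computation of the claim-6 selection accuracy value.
# sign_mask: True where the traffic sign's pixels lie in the sample image.
# window_mask: True inside the sliding-window selection.

def selection_accuracy(sign_mask, window_mask):
    in_window = sign_mask & window_mask       # sign pixels inside the window
    union = (in_window | sign_mask).sum()     # equals all sign pixels here,
    if union == 0:                            # since in_window is a subset
        return 0.0                            # of sign_mask
    return in_window.sum() / union

def is_positive_sample(sign_mask, window_mask, threshold=0.5):
    # Claim 6: selections at or above the set threshold become positive
    # samples; the 0.5 default is illustrative only.
    return selection_accuracy(sign_mask, window_mask) >= threshold
```

Because the in-window sign pixels are a subset of all sign pixels, the value reduces to the fraction of the sign covered by the selection, reaching 1.0 only when the window contains the entire sign.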
7. The apparatus of claim 6, wherein the traffic sign position recognition model training subunit comprises:
a traffic sign position recognition model training module configured to input each of the plurality of sample images into an initial traffic sign position recognition model in sequence to obtain predicted position information corresponding to each of the plurality of sample images, compare the predicted position information corresponding to each sample image with the sample position information corresponding to that sample image to obtain a prediction accuracy of the initial traffic sign position recognition model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and, if the prediction accuracy is greater than the preset accuracy threshold, use the initial traffic sign position recognition model as the trained traffic sign position recognition model.
8. The apparatus of claim 7, wherein the traffic sign position recognition model training subunit further comprises:
a parameter adjustment module configured to, in response to the prediction accuracy being not greater than the preset accuracy threshold, adjust the parameters of the initial traffic sign position recognition model and return to the traffic sign position recognition model training module.
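Claims 7 and 8 together describe a predict–compare–threshold–adjust loop. A sketch of that control flow, where `predict`, `accuracy_of`, and `adjust_parameters` are placeholder names standing in for the unspecified model operations, and the round cap is added only to keep the illustration terminating:

```python
# Hypothetical training loop for claims 7 and 8. The model interface and
# max_rounds safeguard are assumptions; the claims only fix the control
# flow: stop when accuracy exceeds the threshold, otherwise adjust and retry.

def train(model, samples, threshold, max_rounds=100):
    """samples: list of (sample_image, sample_position_info) pairs."""
    for _ in range(max_rounds):
        predictions = [model.predict(img) for img, _ in samples]
        accuracy = model.accuracy_of(predictions, [pos for _, pos in samples])
        if accuracy > threshold:      # claim 7: threshold met, training done
            return model
        model.adjust_parameters()     # claim 8: otherwise adjust and retry
    return model
```

The same loop appears in method form in claims 2 and 3; only the packaging (method steps versus apparatus modules) differs.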
9. The apparatus of claim 6, wherein the sample position information acquisition subunit comprises:
a feature extraction module configured to, for each positive sample traffic sign selection image in all the positive sample traffic sign selection images, perform feature extraction on the position of the traffic sign in the positive sample traffic sign selection image through a preset position sliding window to obtain initial position information of the traffic sign in the positive sample traffic sign selection image, wherein the position sliding window comprises at least one of the following: a nine-grid sliding window, a six-grid sliding window, and a four-grid sliding window;
and the sample position information acquisition module is configured to fuse all initial position information corresponding to all the positive sample traffic sign selection images to obtain sample position information of the sample image.
10. The apparatus of any of claims 6 to 9, wherein the feature information comprises at least one of: color feature information, shape feature information, and texture feature information; and
the traffic sign position recognition unit includes:
a dimension reduction subunit configured to perform a dimension reduction operation on the fused feature information to obtain dimension-reduced feature information;
and a final position information acquisition subunit configured to fit the dimension-reduced feature information to obtain the final position information of the traffic sign in the image to be processed.
11. An electronic device, comprising:
one or more processors;
a memory having one or more programs stored thereon,
the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-5.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
CN202110748454.7A 2019-05-17 2019-05-17 Method and device for identifying traffic sign Active CN113409393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110748454.7A CN113409393B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910412771.4A CN110097600B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign
CN202110748454.7A CN113409393B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910412771.4A Division CN110097600B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign

Publications (2)

Publication Number Publication Date
CN113409393A true CN113409393A (en) 2021-09-17
CN113409393B CN113409393B (en) 2023-10-03

Family

ID=67448476

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110748454.7A Active CN113409393B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign
CN201910412771.4A Active CN110097600B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910412771.4A Active CN110097600B (en) 2019-05-17 2019-05-17 Method and device for identifying traffic sign

Country Status (1)

Country Link
CN (2) CN113409393B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409393B (en) * 2019-05-17 2023-10-03 百度在线网络技术(北京)有限公司 Method and device for identifying traffic sign
CN111931683B (en) * 2020-08-25 2023-09-05 腾讯科技(深圳)有限公司 Image recognition method, device and computer readable storage medium

Citations (9)

Publication number Priority date Publication date Assignee Title
KR20060119674A (en) * 2005-05-18 2006-11-24 엘지전자 주식회사 Method and apparatus for providing prediction information on traffic and using the information
US20110249867A1 (en) * 2010-04-13 2011-10-13 International Business Machines Corporation Detection of objects in digital images
US20140347475A1 (en) * 2013-05-23 2014-11-27 Sri International Real-time object detection, tracking and occlusion reasoning
WO2016155371A1 (en) * 2015-03-31 2016-10-06 百度在线网络技术(北京)有限公司 Method and device for recognizing traffic signs
CN108734123A (en) * 2018-05-18 2018-11-02 武昌理工学院 Highway signs recognition methods, electronic equipment, storage medium and system
CN108805018A (en) * 2018-04-27 2018-11-13 淘然视界(杭州)科技有限公司 Road signs detection recognition method, electronic equipment, storage medium and system
CN108985217A (en) * 2018-07-10 2018-12-11 常州大学 A kind of traffic sign recognition method and system based on deep space network
US20190087673A1 (en) * 2017-09-15 2019-03-21 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for identifying traffic light
CN110097600B (en) * 2019-05-17 2021-08-06 百度在线网络技术(北京)有限公司 Method and device for identifying traffic sign

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US8620032B2 (en) * 2011-05-10 2013-12-31 GM Global Technology Operations LLC System and method for traffic signal detection
CN103366190B (en) * 2013-07-26 2017-03-29 中国科学院自动化研究所 A kind of method of identification traffic signss
CN104616021B (en) * 2014-12-24 2020-05-05 清华大学 Traffic sign image processing method and device
CN106326288B (en) * 2015-06-30 2019-12-03 阿里巴巴集团控股有限公司 Image search method and device
CN105809121A (en) * 2016-03-03 2016-07-27 电子科技大学 Multi-characteristic synergic traffic sign detection and identification method
CN106682664A (en) * 2016-12-07 2017-05-17 华南理工大学 Water meter disc area detection method based on full convolution recurrent neural network
CN109325438B (en) * 2018-09-18 2021-06-15 桂林电子科技大学 Real-time identification method of live panoramic traffic sign

Patent Citations (10)

Publication number Priority date Publication date Assignee Title
KR20060119674A (en) * 2005-05-18 2006-11-24 엘지전자 주식회사 Method and apparatus for providing prediction information on traffic and using the information
US20110249867A1 (en) * 2010-04-13 2011-10-13 International Business Machines Corporation Detection of objects in digital images
US20140347475A1 (en) * 2013-05-23 2014-11-27 Sri International Real-time object detection, tracking and occlusion reasoning
WO2016155371A1 (en) * 2015-03-31 2016-10-06 百度在线网络技术(北京)有限公司 Method and device for recognizing traffic signs
US20190087673A1 (en) * 2017-09-15 2019-03-21 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for identifying traffic light
CN109508580A (en) * 2017-09-15 2019-03-22 百度在线网络技术(北京)有限公司 Traffic lights recognition methods and device
CN108805018A (en) * 2018-04-27 2018-11-13 淘然视界(杭州)科技有限公司 Road signs detection recognition method, electronic equipment, storage medium and system
CN108734123A (en) * 2018-05-18 2018-11-02 武昌理工学院 Highway signs recognition methods, electronic equipment, storage medium and system
CN108985217A (en) * 2018-07-10 2018-12-11 常州大学 A kind of traffic sign recognition method and system based on deep space network
CN110097600B (en) * 2019-05-17 2021-08-06 百度在线网络技术(北京)有限公司 Method and device for identifying traffic sign

Non-Patent Citations (1)

Title
TANG; LIU Bo; CAI Zixing; XIE Bin: "Traffic Sign Recognition Based on Two-Dimensional Principal Component Analysis", Computer Science, no. 11, pages 293 - 294 *

Also Published As

Publication number Publication date
CN110097600A (en) 2019-08-06
CN110097600B (en) 2021-08-06
CN113409393B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN109508580B (en) Traffic signal lamp identification method and device
CN111626208B (en) Method and device for detecting small objects
CN110163153B (en) Method and device for recognizing traffic sign board boundary
CN108921200B (en) Method, apparatus, device and medium for classifying driving scene data
CN110119725B (en) Method and device for detecting signal lamp
SE541962C2 (en) Method and apparatus for detecting vehicle contour based on point cloud data
CN110363098B (en) Violent behavior early warning method and device, readable storage medium and terminal equipment
CN115616937B (en) Automatic driving simulation test method, device, equipment and computer readable medium
CN115240157B (en) Method, apparatus, device and computer readable medium for persistence of road scene data
CN110097600B (en) Method and device for identifying traffic sign
CN111310815A (en) Image recognition method and device, electronic equipment and storage medium
CN111340015A (en) Positioning method and device
CN113592033B (en) Oil tank image recognition model training method, oil tank image recognition method and device
CN109903308B (en) Method and device for acquiring information
CN111340880B (en) Method and apparatus for generating predictive model
CN111310595B (en) Method and device for generating information
CN115546767B (en) Data transmission method, device, equipment and computer readable medium
CN110633598B (en) Method and device for determining a driving area in an environment image
CN110135517B (en) Method and device for obtaining vehicle similarity
CN114119973A (en) Spatial distance prediction method and system based on image semantic segmentation network
CN115049895B (en) Image attribute identification method, attribute identification model training method and device
CN111383337B (en) Method and device for identifying objects
CN115512336B (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN111523409B (en) Method and device for generating position information
CN116452957B (en) Quality detection method and device for image annotation data and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant