CN113255609A - Traffic identification recognition method and device based on neural network model - Google Patents


Info

Publication number
CN113255609A
Authority
CN
China
Prior art keywords
image
traffic
images
neural network
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110748994.5A
Other languages
Chinese (zh)
Other versions
CN113255609B (en)
Inventor
贾双成
朱磊
李晓宵
李成军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd filed Critical Zhidao Network Technology Beijing Co Ltd
Priority to CN202110748994.5A priority Critical patent/CN113255609B/en
Publication of CN113255609A publication Critical patent/CN113255609A/en
Application granted granted Critical
Publication of CN113255609B publication Critical patent/CN113255609B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content

Abstract

The application relates to a traffic sign recognition method and device based on a neural network model. The method comprises the following steps: acquiring each frame of image containing a traffic sign from video data; superimposing, according to the timestamps of the frames containing the traffic sign, two images separated by a set time to generate an image to be recognized; and inputting the image to be recognized into a neural network model so that the model outputs the traffic sign in the image. The scheme provided by the application can enhance the recognition effect of the neural network model and accurately recognize the traffic sign.

Description

Traffic sign recognition method and device based on neural network model
Technical Field
The application relates to the technical field of navigation, and in particular to a traffic sign recognition method and device based on a neural network model.
Background
With the rapid development of intelligent vehicles and autonomous-driving technology, recognition of road traffic signs has become an important component of safe driving. An intelligent vehicle can acquire an image containing a road traffic sign, recognize the sign from the image, and then drive intelligently according to the recognized sign.
Road traffic signs are drawn on signboards on both sides of the road or painted on the road surface as markings, for example ground markings and signboard signs. Recognizing these signs is an important field in intelligent driving; accurate recognition improves the safety factor of road traffic. In the related art, a traffic sign is recognized from a single image, and it often cannot be recognized accurately because of the influence of the image-capture device, environmental factors, and the processing method.
Disclosure of Invention
To solve or partially solve the problems in the related art, the application provides a traffic sign recognition method and device based on a neural network model, which can enhance the recognition effect of the neural network model and accurately recognize traffic signs.
A first aspect of the application provides a traffic sign recognition method based on a neural network model, the method comprising: acquiring each frame of image containing a traffic sign from video data;
superimposing, according to the timestamps of the frames containing the traffic sign, two images separated by a set time to generate an image to be recognized;
and inputting the image to be recognized into a neural network model so that the model outputs the traffic sign in the image to be recognized.
Preferably, acquiring each frame of image containing a traffic sign from the video data comprises:
acquiring the image containing the traffic sign at a time position according to the time position of that image in the video data.
Preferably, superimposing two images separated by the set time according to the timestamps of the frames containing the traffic sign to generate the image to be recognized comprises:
superimposing the two images, selecting the union of the traffic-sign regions of the two superimposed images and the intersection of their backgrounds, and generating the image to be recognized.
Preferably, superimposing two images separated by the set time according to the timestamps of the frames containing the traffic sign to generate the image to be recognized comprises:
if the image difference degree of two separated images in the video data is greater than a predetermined threshold, setting the interval between the two images as the set time according to their timestamps;
and superimposing two images separated by the set time according to the timestamps of the frames containing the traffic sign to generate the image to be recognized.
Preferably, the set time is greater than or equal to 1 second.
A second aspect of the present application provides a traffic sign recognition apparatus based on a neural network model, the apparatus comprising:
an image acquisition module, configured to acquire each frame of image containing a traffic sign from video data;
an image generation module, configured to superimpose two images separated by a set time, according to the timestamps of the frames containing the traffic sign acquired by the image acquisition module, to generate an image to be recognized;
and an input module, configured to input the image to be recognized generated by the image generation module into a neural network model so that the model outputs the traffic sign in the image to be recognized.
Preferably, the image acquisition module is specifically configured to acquire the image containing the traffic sign at a time position according to the time position of that image in the video data.
Preferably, the image generation module is specifically configured to superimpose the two images separated by the set time according to the timestamps of the frames containing the traffic sign acquired by the image acquisition module, select the union of the traffic-sign regions of the two superimposed images and the intersection of their backgrounds, and generate the image to be recognized.
A third aspect of the present application provides an electronic device comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method as described above.
A fourth aspect of the present application provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform a method as described above.
The technical scheme provided by the application can include the following beneficial effects:
In this technical scheme, two images containing the traffic sign and separated by the set time are superimposed to generate the image to be recognized. The traffic sign in the image to be recognized is the union of the traffic-sign regions of the two superimposed images, and the background is the intersection of their backgrounds. This enhances the distinctiveness of the traffic sign, so the sign is better reflected in the image to be recognized, the recognition effect of the neural network model is enhanced, and the traffic sign can be recognized accurately.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application, as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a schematic flow chart of a traffic sign recognition method based on a neural network model according to an embodiment of the present application;
fig. 2 is another schematic flow chart of a traffic sign recognition method based on a neural network model according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a traffic sign recognition apparatus based on a neural network model according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the accompanying drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
The embodiment of the application provides a traffic sign recognition method based on a neural network model, which can enhance the recognition effect of the neural network model and accurately recognize traffic signs.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Example one:
Fig. 1 is a schematic flowchart of a traffic sign recognition method based on a neural network model according to an embodiment of the present application.
Referring to fig. 1, the traffic sign recognition method based on a neural network model includes:
In step 101, each frame of image containing a traffic sign in the video data is acquired.
In one embodiment, the traffic signs include signs painted on the road surface and signs drawn on signboards. Each frame of image in the video data has a unique timestamp, so the video data can be divided into a sequence of images with consecutive timestamps, and each frame containing a traffic sign is acquired from that sequence.
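As an illustration only (not from the patent), the frame-splitting step above can be sketched in Python, with frames modeled as timestamped records and `contains_sign` standing in for a real sign detector:

```python
def split_and_filter(video_frames, contains_sign):
    """Keep only the frames that contain a traffic sign, keyed by timestamp."""
    return {ts: frame for ts, frame in video_frames if contains_sign(frame)}

# Toy video: each frame is a dict with a flag standing in for a real detection.
video = [(0, {"sign": False}), (1, {"sign": True}), (2, {"sign": True})]
kept = split_and_filter(video, lambda frame: frame["sign"])
```

In a real pipeline the timestamps would come from the video container and the predicate from the recognition model; the structure of the step is the same.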
In step 102, two images separated by a set time are superimposed according to the timestamp of each frame of image containing the traffic sign, generating an image to be recognized.
In a specific embodiment, two images separated by the set time are superimposed according to the timestamps of the frames containing the traffic sign, so that the traffic signs and backgrounds of the two images are superimposed, generating an image to be recognized that combines the traffic signs of both images.
In step 103, the image to be recognized is input to the neural network model, so that the neural network model outputs the traffic sign in the image to be recognized.
In one embodiment, the image to be recognized is input to a neural network model, and the model outputs the traffic sign contained in it.
In the traffic sign recognition method based on a neural network model provided by this embodiment, two images containing the traffic sign and separated by the set time are superimposed to generate the image to be recognized. This enhances the distinctiveness of the traffic sign and makes it more conspicuous, so the recognition effect of the neural network model is enhanced and the traffic sign can be recognized accurately.
Example two:
Fig. 2 is another schematic flowchart of a traffic sign recognition method based on a neural network model according to an embodiment of the present application. Fig. 2 describes the solution of the present application in more detail than fig. 1.
Referring to fig. 2, the traffic sign recognition method based on a neural network model includes:
In step 201, video data containing a traffic sign, captured by a vehicle-mounted camera device, is acquired.
In one embodiment, the traffic signs include signs painted on the road surface and signs on signboards, such as lane lines on the road surface and speed-limit signs on a signboard; the lane lines may include dotted lines, solid lines, yellow lines, and guidance lane lines. The vehicle is equipped with a vehicle-mounted camera device, which may be mounted at the front windshield or at the rear-view mirror. While the vehicle is traveling, the camera device photographs the road ahead, and video data containing the traffic signs in front of the vehicle can be acquired from it.
In one embodiment, the vehicle-mounted camera device may be a monocular or binocular camera. The monocular camera may belong to a driving recorder or to other camera equipment mounted on the vehicle, such as a mobile-phone camera.
In one embodiment, the driving recorder may be mounted at the front windshield or at the rear-view mirror of the vehicle. While the vehicle is traveling, video data containing the traffic signs in front of the vehicle, captured by the driving recorder, can be acquired.
In step 202, according to the time position of the image containing the traffic sign in the video data, the image containing the traffic sign at that time position is acquired.
In a specific embodiment, each frame of the video data has a unique timestamp, so the video data can be divided into multiple frames according to those timestamps, yielding each frame that contains a traffic sign.
In one embodiment, the image containing the traffic sign at a time position may be acquired according to the time position of that image in the video data. The video data containing the traffic sign can be input into the neural network model, which outputs the images containing the traffic sign through an image recognition model; those output images are then acquired.
In one embodiment, an initial time t0 is recorded when the neural network model starts recognizing a set calibration object in the video data. When the model recognizes an image containing the traffic sign, it records the time t1; the time position of that image is then the time difference t1 - t0. Each frame in the video data has a timestamp; if the timestamp of the starting image of the video data is t, the time position of the image containing the traffic sign is t + (t1 - t0). The recognized images containing the traffic sign are named and stored according to their time positions, so reading an image's file name yields its time position.
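The timing bookkeeping above can be sketched as follows; the variable names follow the text, while the file-naming scheme is a hypothetical illustration:

```python
def time_position(t_start, t0, t1):
    """Time position of the sign frame within the video: t + (t1 - t0).

    t_start: timestamp of the video's first frame (t in the text)
    t0: when the model starts recognizing the calibration object
    t1: when the frame containing the traffic sign is recognized
    """
    return t_start + (t1 - t0)

pos = time_position(t_start=100.0, t0=5.0, t1=8.5)
filename = f"sign_{pos:.1f}.png"  # position can be read back from the name
```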
In step 203, if the image difference degree of two separated images in the video data is greater than a predetermined threshold, the interval between the two images is set as the set time according to their timestamps.
In one embodiment, the image difference degree is the degree of difference between two images. Two images separated by a certain time are compared; if their image difference degree is greater than the predetermined threshold, that interval is taken as the set time. The interval between two images can be obtained from their timestamps: the difference of the two timestamps is the interval. If the image difference degree of the two images is less than or equal to the predetermined threshold, the interval is lengthened or shortened until the compared images differ by more than the predetermined threshold, and that interval is set as the set time.
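A minimal sketch of the interval search described above, assuming a stand-in `difference_at` function and an arbitrary step size and cap (neither is specified by the patent):

```python
def find_set_time(difference_at, start_gap, threshold, step=1, max_gap=30):
    """Widen the gap until the two frames it separates differ enough."""
    gap = start_gap
    while gap <= max_gap:
        if difference_at(gap) > threshold:
            return gap          # this gap becomes the "set time"
        gap += step             # frames still too similar: lengthen the gap
    return None                 # nothing under the cap was dissimilar enough

# Toy difference curve: frames drift further apart as the gap grows.
set_time = find_set_time(lambda g: 0.2 * g, start_gap=1, threshold=0.5)
```

The text also allows shortening the gap; a symmetric search in both directions would follow the same pattern.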
In one embodiment, the image difference degree can be obtained by comparing the pixel values at corresponding pixel positions of the two images separated by the certain time. If the pixel values at a corresponding position are the same, the position is counted as consistent and the count count_right is incremented by 1. With count_all the total number of pixels of one of the two images, the image difference degree is Z = 1 - count_right / count_all. If Z is greater than the predetermined threshold, the interval between the two images is set as the set time.
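The pixel-wise comparison can be sketched as follows, with images modeled as flat lists of pixel values; Z is computed as the fraction of non-matching positions:

```python
def difference_degree(img_a, img_b):
    """Z = 1 - count_right / count_all over corresponding pixel positions."""
    assert len(img_a) == len(img_b), "images must have the same size"
    count_all = len(img_a)
    count_right = sum(1 for a, b in zip(img_a, img_b) if a == b)
    return 1 - count_right / count_all

z = difference_degree([0, 0, 255, 255], [0, 0, 255, 0])  # 3 of 4 pixels match
```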
In one embodiment, the set time may be determined from the image difference degree of two images separated by a certain time in the video data. Vehicle speeds during capture vary; taking 10 km/h, i.e. about 2.78 m/s, as the minimum speed, video captured at 2.78 m/s already shows a clear difference between two images 1 second apart. When the set time is greater than or equal to 1 second, the image difference degree of the two images separated by the set time can therefore exceed the predetermined threshold.
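The arithmetic behind the 1-second choice, as a quick check (the 10 km/h minimum speed is the text's assumption):

```python
speed_kmh = 10                 # assumed minimum capture speed
speed_ms = speed_kmh / 3.6     # km/h to m/s: about 2.78 m/s
displacement = speed_ms * 1.0  # metres the camera moves during a 1-second gap
```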
In step 204, two images separated by the set time are superimposed according to the timestamp of each frame of image containing the traffic sign; the union of the traffic-sign regions of the two superimposed images and the intersection of their backgrounds are selected, generating the image to be recognized.
In a specific embodiment, two images separated by the set time are superimposed according to the images containing the traffic sign with consecutive timestamps, generating the image to be recognized. For example, take a sequence of images with consecutive timestamps: a first image with timestamp 1, a second image with timestamp 2, a third image with timestamp 3, a fourth image with timestamp 4, and so on. If the set time is 1 second and the first image (timestamp 1) and the third image (timestamp 3) are 1 second apart, the first and third images are superimposed to generate the image to be recognized.
In a specific embodiment, when the two images are superimposed, their traffic signs and backgrounds are superimposed as well. The union of the two traffic-sign regions is selected as the traffic sign of the image to be recognized, and the intersection of the two backgrounds is selected as its background, generating a new image to be recognized. Here the background is everything outside the traffic sign. Because the traffic sign in the image to be recognized is the union of the two superimposed sign regions while the background is the intersection of the two backgrounds, the distinctiveness of the traffic sign is enhanced and the sign stands out more clearly in the image to be recognized.
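The union/intersection rule above can be sketched with masks modeled as sets of pixel coordinates on a toy grid; this illustrates the rule, not the patent's actual implementation:

```python
def superimpose(sign_a, sign_b, all_pixels):
    """Composite sign = union of the sign masks; background = what remains."""
    sign = sign_a | sign_b                                      # union
    background = (all_pixels - sign_a) & (all_pixels - sign_b)  # intersection
    return sign, background

grid = {(x, y) for x in range(3) for y in range(3)}  # a 3x3 toy image
sign, background = superimpose({(0, 0), (0, 1)}, {(0, 1), (1, 1)}, grid)
```

Note that intersecting the two background regions is equivalent to removing the union of the sign regions, so the two selections partition the image exactly.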
In step 205, the image to be recognized is input to the neural network model, so that the neural network model outputs the traffic sign in the image to be recognized.
In one embodiment, the image to be recognized is input to a neural network model. Through a traffic sign recognition model, the neural network model can recognize the traffic sign in the image to be recognized and output it.
This embodiment of the application provides a traffic sign recognition method based on a neural network model. Two images containing the traffic sign and separated by the set time are superimposed to generate the image to be recognized; its traffic sign is the union of the traffic-sign regions of the superimposed images, and its background is the intersection of their backgrounds. This enhances the distinctiveness of the traffic sign and makes it more conspicuous in the image to be recognized, so the recognition effect of the neural network model is enhanced and the traffic sign can be recognized accurately.
Example three:
corresponding to the embodiment of the application function implementation method, the application also provides a traffic sign recognition device based on the neural network model, an electronic device and a corresponding embodiment.
Fig. 3 is a schematic structural diagram of a traffic sign recognition apparatus based on a neural network model according to an embodiment of the present application.
Referring to fig. 3, a traffic sign recognition apparatus based on a neural network model includes an image acquisition module 301, an image generation module 302, and an input module 303.
The image acquisition module 301 is configured to acquire each frame of image containing a traffic sign from the video data.
In one embodiment, the traffic signs include signs painted on the road surface and signs on signboards, such as lane lines on the road surface and speed-limit signs on a signboard; the lane lines may include dotted lines, solid lines, yellow lines, and guidance lane lines. The vehicle is equipped with a vehicle-mounted camera device, which may be mounted at the front windshield or at the rear-view mirror. While the vehicle is traveling, the camera device photographs the road ahead, and the image acquisition module 301 can acquire the video data containing the traffic signs in front of the vehicle captured by the camera device.
In one embodiment, the vehicle-mounted camera device may be a monocular or binocular camera. The monocular camera may belong to a driving recorder or to other camera equipment mounted on the vehicle, such as a mobile-phone camera.
In one embodiment, the driving recorder may be mounted at the front windshield or at the rear-view mirror of the vehicle. While the vehicle is traveling, the image acquisition module 301 may acquire video data containing the traffic signs in front of the vehicle captured by the driving recorder.
The image acquisition module 301 is specifically configured to acquire the image containing the traffic sign at a time position according to the time position of that image in the video data.
In a specific embodiment, each frame of the video data has a unique timestamp, so the image acquisition module 301 may divide the video data into multiple frames according to those timestamps and acquire each frame containing a traffic sign.
In one embodiment, the image acquisition module 301 may acquire the image containing the traffic sign at a time position according to the time position of that image in the video data. The module may input the video data containing the traffic sign into the neural network model, which outputs the images containing the traffic sign through an image recognition model, and then acquire those output images.
The image generation module 302 is configured to superimpose two images separated by the set time, according to the timestamps of the frames containing the traffic sign acquired by the image acquisition module 301, to generate the image to be recognized.
In one embodiment, the image generation module 302 is further configured to set the interval between two images in the video data as the set time, according to their timestamps, if their image difference degree is greater than a predetermined threshold.
In one embodiment, the image difference degree is the degree of difference between two images. The image generation module 302 compares two images separated by a certain time and, if their image difference degree is greater than the predetermined threshold, takes that interval as the set time. The module obtains the interval from the timestamps of the two images: the difference of the timestamps is the interval. If the image difference degree is less than or equal to the predetermined threshold, the image generation module 302 lengthens or shortens the interval until the compared images differ by more than the predetermined threshold, and that interval is set as the set time.
In one embodiment, the image generation module 302 may determine the set time from the image difference degree of two images separated by a certain time in the video data. Vehicle speeds during capture vary; taking 10 km/h, i.e. about 2.78 m/s, as the minimum speed, video captured at 2.78 m/s already shows a clear difference between two images 1 second apart. When the set time is greater than or equal to 1 second, the image difference degree of the two images separated by the set time can therefore exceed the predetermined threshold.
In an embodiment, the image generating module 302 is specifically configured to superimpose two images separated by a set time according to a timestamp of each frame of image including a traffic identifier acquired by the image acquiring module 301, select a union portion of the traffic identifier in the two superimposed images, select an intersection portion of a background in the two superimposed images, and generate the image to be identified.
In one embodiment, the image generation module 302 generates the image to be recognized by superimposing two images separated by a set time according to the images containing the traffic signs of the continuous time stamps. For example, the image generation module 302 generates, from a plurality of images of consecutive timestamps: a first image with a time stamp of 1, a second image with a time stamp of 2, a third image with a time stamp of 3, and a fourth image … with a time stamp of 4. The set time may be 1 second, and assuming that the first image with the timestamp of 1 and the third image with the timestamp of 3 are separated by 1 second, the image generation module 302 superimposes the first image with the timestamp of 1 and the third image with the timestamp of 3 to generate the image to be recognized.
In a specific embodiment, when the image generation module 302 superimposes two images, the traffic identifications and the backgrounds of the two images are superimposed as well. The image generation module 302 selects the union part of the traffic identifications in the two images as the traffic identification of the image to be recognized, and the intersection part of the backgrounds as its background, thereby generating a new image to be recognized. Here the background is the portion of each image other than the traffic identification. Because the traffic identification in the image to be recognized is the union of the identifications in the two superimposed images while the background is the intersection of their backgrounds, the difference of the traffic identification relative to the background is enhanced, so the traffic identification is better reflected, and more prominent, in the image to be recognized.
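Assuming per-frame traffic-identification masks are available from an earlier detection step (the application does not specify the pixel arithmetic), the union/intersection composition could be sketched as follows; averaging the two frames for the background intersection is an illustrative choice.

```python
import numpy as np

def superimpose(frame_a, frame_b, sign_mask_a, sign_mask_b):
    """Compose an image to be recognized: traffic-identification
    pixels come from the union of the two sign masks; the remaining
    pixels (intersection of the two backgrounds) are averaged."""
    sign_union = sign_mask_a | sign_mask_b
    # Background: average of the two frames (intersection region).
    out = ((frame_a.astype(np.uint16) + frame_b.astype(np.uint16)) // 2).astype(np.uint8)
    # Union region: keep the pixels of whichever frame shows the sign.
    out[sign_mask_a] = frame_a[sign_mask_a]
    out[sign_mask_b] = frame_b[sign_mask_b]
    return out, sign_union
```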
And an input module 303, configured to input the image to be recognized generated by the image generation module 302 to the neural network model, so that the neural network model outputs the traffic identifier in the image to be recognized.
In one embodiment, the input module 303 inputs the image to be recognized to the neural network model, which recognizes the traffic identification in the image to be recognized and outputs it.
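A minimal inference sketch, assuming `model` is any callable returning per-class scores; the application does not fix a network architecture, framework, or preprocessing, so all names here are hypothetical.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Scale pixel values to [0, 1] and add a leading batch axis."""
    return (image.astype(np.float32) / 255.0)[None, ...]

def recognize(model, image: np.ndarray) -> int:
    """Feed the image to be recognized to the model and return the
    index of the highest-scoring traffic-identification class."""
    scores = model(preprocess(image))
    return int(np.argmax(scores))
```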
According to the technical scheme above, two images containing the traffic identification and separated by the set time are superimposed to generate the image to be recognized. The traffic identification in the image to be recognized is the union part of the identifications in the superimposed images, and the background is the intersection part of their backgrounds. This enhances the difference of the traffic identification, makes it better reflected and more prominent in the image to be recognized, and thereby improves the recognition effect of the neural network model, so that the traffic identification is recognized accurately.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 4 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Referring to fig. 4, the electronic device 40 includes a memory 401 and a processor 402.
The processor 402 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 401 may include various types of storage units, such as system memory, read-only memory (ROM), and a permanent storage device. The ROM may store static data or instructions required by the processor 402 or other modules of the computer. The permanent storage device may be a readable and writable storage device, and may be a non-volatile device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is used as the permanent storage device. In other embodiments, the permanent storage device may be a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a readable and writable volatile memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. Furthermore, the memory 401 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) as well as magnetic and/or optical disks. In some embodiments, the memory 401 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, mini SD card, Micro-SD card), or a magnetic floppy disk. Computer-readable storage media do not include carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 401 has stored thereon executable code which, when processed by the processor 402, may cause the processor 402 to perform some or all of the methods described above.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or electronic device, server, etc.), causes the processor to perform some or all of the various steps of the above-described methods in accordance with the present application.
Having described embodiments of the present application, the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A traffic sign recognition method based on a neural network model is characterized by comprising the following steps:
acquiring each frame of image containing a traffic identifier in video data;
according to the timestamp of each frame of image containing the traffic identification, overlapping two images separated by a set time to generate an image to be identified;
and inputting the image to be recognized to a neural network model so that the neural network model outputs the traffic identification in the image to be recognized.
2. The method of claim 1, wherein the obtaining each frame of image containing traffic sign in the video data comprises:
and acquiring the images containing the traffic identifications of the time positions according to the time positions of the images containing the traffic identifications in the video data.
3. The method of claim 1, wherein the generating an image to be recognized by superimposing two images separated by a set time according to the timestamp of each frame of image containing the traffic sign comprises:
and superposing two images separated by the set time according to the timestamp of each frame of image containing the traffic identification, selecting a union part of the traffic identification in the two superposed images, selecting an intersection part of the background in the two superposed images, and generating the image to be identified.
4. The method according to any one of claims 1 to 3, wherein the generating an image to be recognized by superimposing two images separated by a set time according to the timestamp of each frame of image containing the traffic sign comprises:
if the image difference degree of two separated images in the video data is larger than a preset threshold value, setting the separation time of the two separated images as set time according to the time stamps of the two separated images;
and superposing two images separated by set time according to the time stamp of each frame of image containing the traffic identification to generate an image to be identified.
5. The method of claim 4, wherein the set time is greater than or equal to 1 second.
6. A traffic sign recognition device based on a neural network model is characterized by comprising:
the image acquisition module is used for acquiring each frame of image containing the traffic identification in the video data;
the image generation module is used for superposing two images which are separated by set time according to the time stamp of each frame of image containing the traffic identification, which is acquired by the image acquisition module, so as to generate an image to be identified;
and the input module is used for inputting the image to be recognized generated by the image generation module into a neural network model so as to enable the neural network model to output the traffic identification in the image to be recognized.
7. The apparatus of claim 6, wherein:
the image acquisition module is specifically configured to acquire an image containing a traffic identifier at a time position according to the time position of the image containing the traffic identifier in the video data.
8. The apparatus of claim 6, wherein:
the image generation module is specifically configured to superimpose two images separated by the set time according to the timestamp of each frame of image including the traffic identifier acquired by the image acquisition module, select a union part of the traffic identifier in the two superimposed images, select an intersection part of the background in the two superimposed images, and generate the image to be identified.
9. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-5.
10. A non-transitory machine-readable storage medium having executable code stored thereon, which when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1-5.
CN202110748994.5A 2021-07-02 2021-07-02 Traffic identification recognition method and device based on neural network model Active CN113255609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110748994.5A CN113255609B (en) 2021-07-02 2021-07-02 Traffic identification recognition method and device based on neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110748994.5A CN113255609B (en) 2021-07-02 2021-07-02 Traffic identification recognition method and device based on neural network model

Publications (2)

Publication Number Publication Date
CN113255609A true CN113255609A (en) 2021-08-13
CN113255609B CN113255609B (en) 2021-10-29

Family

ID=77190494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110748994.5A Active CN113255609B (en) 2021-07-02 2021-07-02 Traffic identification recognition method and device based on neural network model

Country Status (1)

Country Link
CN (1) CN113255609B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242003A (en) * 2018-08-13 2019-01-18 浙江零跑科技有限公司 Method is determined based on the vehicle-mounted vision system displacement of depth convolutional neural networks
CN110866449A (en) * 2019-10-21 2020-03-06 北京京东尚科信息技术有限公司 Method and device for identifying target object in road
CN111931693A (en) * 2020-08-31 2020-11-13 平安国际智慧城市科技股份有限公司 Traffic sign recognition method, device, terminal and medium based on artificial intelligence
CN112528944A (en) * 2020-12-23 2021-03-19 杭州海康汽车软件有限公司 Image identification method and device, electronic equipment and storage medium
US20210117705A1 (en) * 2019-02-25 2021-04-22 Baidu Online Network Technology (Beijing) Co., Ltd. Traffic image recognition method and apparatus, and computer device and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242003A (en) * 2018-08-13 2019-01-18 浙江零跑科技有限公司 Method is determined based on the vehicle-mounted vision system displacement of depth convolutional neural networks
US20210117705A1 (en) * 2019-02-25 2021-04-22 Baidu Online Network Technology (Beijing) Co., Ltd. Traffic image recognition method and apparatus, and computer device and medium
CN110866449A (en) * 2019-10-21 2020-03-06 北京京东尚科信息技术有限公司 Method and device for identifying target object in road
CN111931693A (en) * 2020-08-31 2020-11-13 平安国际智慧城市科技股份有限公司 Traffic sign recognition method, device, terminal and medium based on artificial intelligence
CN112528944A (en) * 2020-12-23 2021-03-19 杭州海康汽车软件有限公司 Image identification method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DONG, Qiang: "Research on Key Technologies of Airborne Image Stitching", China Doctoral Dissertations Full-text Database, Information Science and Technology *
CHEN, Hanshen et al.: "Fast Lane Detection Based on Multi-frame Superposition and Window Search", Computer Science *

Also Published As

Publication number Publication date
CN113255609B (en) 2021-10-29

Similar Documents

Publication Publication Date Title
CN112069643B (en) Automatic driving simulation scene generation method and device
US9727793B2 (en) System and method for image based vehicle localization
CN111047870A (en) Traffic violation vehicle recognition system, server, and non-volatile storage medium storing vehicle control program
CN111400533B (en) Image screening method, device, electronic equipment and storage medium
CN113112524B (en) Track prediction method and device for moving object in automatic driving and computing equipment
CN115119045A (en) Vehicle-mounted multi-camera-based video generation method and device and vehicle-mounted equipment
CN111262903A (en) Server device and vehicle
CN111930877B (en) Map guideboard generation method and electronic equipment
CN112598743B (en) Pose estimation method and related device for monocular vision image
CA2605837C (en) Vehicle and lane mark recognizer
CN113255609B (en) Traffic identification recognition method and device based on neural network model
CN113465615B (en) Lane line generation method and related device
CN112019925B (en) Video watermark identification processing method and device
CN115620277A (en) Monocular 3D environment sensing method and device, electronic equipment and storage medium
CN111523360B (en) Method and device for identifying pavement marker and monocular camera
CN113724390A (en) Ramp generation method and device
CN115235493A (en) Method and device for automatic driving positioning based on vector map
JP2000003438A (en) Sign recognizing device
Abramowski Analysis of the possibility of using video recorder for the assessment speed of vehicle before the accident
JPS61249199A (en) Vehicle identifier
CN113538546B (en) Target detection method, device and equipment for automatic driving
JP7385118B2 (en) Tire air pressure decrease degree determination device, tire air pressure decrease degree determination method and program
JP2012181691A (en) Method for recognizing patten on road surface, and information recording device for vehicle
CN117824693A (en) Test method, test device, computer equipment and storage medium
CN116863559A (en) Driving video generation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant