CN110795977A - Traffic signal identification method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN110795977A
CN110795977A (application CN201910356683.7A)
Authority
CN
China
Prior art keywords
traffic signal
image
signal sub-images
environment
Prior art date
Legal status
Granted
Application number
CN201910356683.7A
Other languages
Chinese (zh)
Other versions
CN110795977B (en)
Inventor
侯涛
罗立
李熠
Current Assignee
Everything Mirror Beijing Computer System Co ltd
Original Assignee
Mobile Internet Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Mobile Internet Technology Group Co Ltd filed Critical Mobile Internet Technology Group Co Ltd
Priority to CN201910356683.7A priority Critical patent/CN110795977B/en
Publication of CN110795977A publication Critical patent/CN110795977A/en
Application granted granted Critical
Publication of CN110795977B publication Critical patent/CN110795977B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 20/584: Recognition of traffic objects; of vehicle lights or traffic lights
    • G06V 20/582: Recognition of traffic objects; of traffic signs
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10016: Image acquisition modality; Video; Image sequence
    • G06T 2207/20081: Special algorithmic details; Training; Learning
    • G06T 2207/20084: Special algorithmic details; Artificial neural networks [ANN]
    • G06T 2207/30252: Subject of image; Vehicle exterior; Vicinity of vehicle

Abstract

The present disclosure relates to a traffic signal recognition method, apparatus, storage medium, and electronic device. The method comprises: inputting an environment image sequence acquired by an image acquisition device into a pre-trained convolutional neural network to obtain the type and position information of the traffic signal sub-images in each frame of environment image; matching the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-images in each frame of environment image, and taking the matched traffic signal sub-images as the areas of the same traffic signal on different environment images; and, for each traffic signal, determining the position of the traffic signal according to the position information of the areas of the traffic signal on the different environment images and the pose information of the image acquisition device when acquiring those environment images. With this technical solution, the efficiency and accuracy of traffic signal identification can be improved.

Description

Traffic signal identification method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a traffic signal identification method, an apparatus, a storage medium, and an electronic device.
Background
A high-precision map is a map with high accuracy and fine detail. Compared with an ordinary map, it contains richer information, such as the precise types and positions of traffic signs, traffic lights, lane lines, and the like, and has become an important component of autonomous/unmanned driving technology.
The raw data for constructing a high-precision map is collected by a professional collection vehicle, and identifying traffic signals such as traffic signs and traffic lights in the driving environment from the collected raw data is an important step in constructing the high-precision map.
In traffic signal identification methods in the related art, multiple frames of environment images collected by the collection vehicle are usually processed first to identify traffic signal sub-images, the traffic signal sub-images belonging to the same traffic signal are then marked manually, and the position of each traffic signal is calculated.
Disclosure of Invention
In order to overcome the problems in the prior art, the present disclosure provides a traffic signal identification method, an apparatus, a storage medium, and an electronic device.
In order to achieve the above object, a first aspect of the embodiments of the present disclosure provides a traffic signal identification method, including:
inputting an environment image sequence acquired by an image acquisition device into a pre-trained convolutional neural network to obtain the type and position information of a traffic signal sub-image in each frame of environment image;
matching the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-images in each frame of environment image, and taking the matched traffic signal sub-images as the areas of the same traffic signal on different environment images;
and, for each traffic signal, determining the position of the traffic signal according to the position information of the area of the traffic signal on the different environment images and the pose information of the image acquisition device when acquiring the different environment images.
Optionally, the pose information includes a heading angle, and the position information includes a center point coordinate;
the matching of the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-images in each frame of environment image comprises the following steps:
acquiring a course angle difference value of the image acquisition device relative to the acquisition of a first environment image when acquiring a second environment image, wherein the first environment image and the second environment image are environment images of continuous frames;
for each first traffic signal sub-image in the first environment image, selecting candidate second traffic signal sub-images from second traffic signal sub-images with the same type according to the course angle difference value, the type and the center point coordinate of the first traffic signal sub-image and the center point coordinate of a second traffic signal sub-image with the same type as the first traffic signal sub-image in the second environment image;
and taking the candidate second traffic signal sub-image with the minimum distance between the central point and the central point of the first traffic signal sub-image in the obtained candidate second traffic signal sub-images as a second traffic signal sub-image matched with the first traffic signal sub-image.
Optionally, the selecting, for each first traffic signal sub-image in the first environment image, a candidate second traffic signal sub-image from second traffic signal sub-images of the same type according to the heading angle difference value, the type and the center point coordinate of the first traffic signal sub-image, and the center point coordinate of a second traffic signal sub-image of the same type as the first traffic signal sub-image in the second environment image includes:
for each first traffic signal sub-image, if the course angle difference value is within a preset course angle difference value range, selecting a second traffic signal sub-image with a central point horizontal coordinate difference value smaller than a preset horizontal coordinate difference value from second traffic signal sub-images with the same type as the candidate second traffic signal sub-image;
if the course angle difference value is larger than the upper limit value of the preset course angle difference value range, and the central point abscissa of the first traffic signal sub-image is located in a first preset abscissa range, selecting a second traffic signal sub-image of which the central point abscissa is located in a second preset abscissa range from second traffic signal sub-images of the same type as the candidate second traffic signal sub-image;
and if the course angle difference value is smaller than the lower limit value of the preset course angle difference value range and the central point abscissa of the first traffic signal sub-image is located in the second preset abscissa range, selecting a second traffic signal sub-image with the central point abscissa located in the first preset abscissa range from the second traffic signal sub-images with the same type as the candidate second traffic signal sub-image.
Optionally, the pose information further includes elevation information;
the matching of the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-images in each frame of environment image further comprises:
acquiring an elevation difference value of the second environment image relative to the first environment image;
if the elevation difference value is within a preset elevation range, screening out candidate second traffic signal sub-images with the difference value of the vertical coordinate of the center point of the first traffic signal sub-image larger than the preset difference value of the vertical coordinate from the obtained candidate second traffic signal sub-images to obtain new candidate second traffic signal sub-images;
if the elevation difference value exceeds the preset elevation range, screening out candidate second traffic signal sub-images of which the vertical coordinate of the center point of the first traffic signal sub-image is smaller than the preset vertical coordinate difference value from the obtained candidate second traffic signal sub-images to obtain new candidate second traffic signal sub-images;
the step of taking the candidate second traffic signal sub-image with the minimum distance between the center point and the center point of the first traffic signal sub-image in the obtained candidate second traffic signal sub-images as the second traffic signal sub-image matched with the first traffic signal sub-image includes:
and taking the candidate second traffic signal sub-image with the minimum distance between the central point and the central point of the first traffic signal sub-image in the new candidate second traffic signal sub-image as a second traffic signal sub-image matched with the first traffic signal sub-image.
Optionally, the position information comprises center point coordinates;
the determining the position of each traffic signal according to the position information of the area of the traffic signal on the different environment images and the pose information of the image acquisition device when acquiring the different environment images comprises the following steps:
and for each traffic signal, determining, based on a triangulation algorithm, the position of the traffic signal according to the coordinates of the central point of the area of the traffic signal on two consecutive frames of environment images and the pose information of the image acquisition device when respectively acquiring the two consecutive frames of environment images.
A second aspect of the embodiments of the present disclosure provides a traffic signal identification apparatus, including:
the input module is used for inputting the environmental image sequence acquired by the image acquisition device into a pre-trained convolutional neural network to obtain the type and position information of the traffic signal sub-image in each frame of environmental image;
the matching module is used for matching the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-images in each frame of environment image, and taking the matched traffic signal sub-images as the areas of the same traffic signal on different environment images;
and the determining module is used for determining the position of each traffic signal according to the position information of the area of the traffic signal on the different environment images and the pose information of the image acquisition device when acquiring the different environment images.
Optionally, the pose information includes a heading angle, and the position information includes a center point coordinate;
the matching module includes:
the first obtaining submodule is used for obtaining a course angle difference value of the image collecting device relative to the first environment image when the image collecting device collects the second environment image, wherein the first environment image and the second environment image are environment images of continuous frames;
the first selection sub-module is used for selecting candidate second traffic signal sub-images from the second traffic signal sub-images with the same type according to the course angle difference value, the type and the center point coordinate of the first traffic signal sub-image and the center point coordinate of a second traffic signal sub-image with the same type as the first traffic signal sub-image in the second environment image;
and the second selection sub-module is used for taking the candidate second traffic signal sub-image with the minimum distance between the central point and the central point of the first traffic signal sub-image in the obtained candidate second traffic signal sub-images as a second traffic signal sub-image matched with the first traffic signal sub-image.
Optionally, the first selecting submodule is configured to:
for each first traffic signal sub-image, if the course angle difference value is within a preset course angle difference value range, selecting a second traffic signal sub-image with a central point horizontal coordinate difference value smaller than a preset horizontal coordinate difference value from second traffic signal sub-images with the same type as the candidate second traffic signal sub-image;
if the course angle difference value is larger than the upper limit value of the preset course angle difference value range, and the central point abscissa of the first traffic signal sub-image is located in a first preset abscissa range, selecting a second traffic signal sub-image of which the central point abscissa is located in a second preset abscissa range from second traffic signal sub-images of the same type as the candidate second traffic signal sub-image;
and if the course angle difference value is smaller than the lower limit value of the preset course angle difference value range and the central point abscissa of the first traffic signal sub-image is located in the second preset abscissa range, selecting a second traffic signal sub-image with the central point abscissa located in the first preset abscissa range from the second traffic signal sub-images with the same type as the candidate second traffic signal sub-image.
Optionally, the pose information further includes elevation information;
the matching module further comprises:
the second acquisition submodule is used for acquiring an elevation difference value of the second environment image relative to the first environment image;
the first screening submodule is used for screening out candidate second traffic signal sub-images, of which the vertical coordinate difference value of the center point of the first traffic signal sub-image is larger than the preset vertical coordinate difference value, from the obtained candidate second traffic signal sub-images if the elevation difference value is within the preset elevation range, so as to obtain new candidate second traffic signal sub-images;
the second screening sub-module is used for screening out candidate second traffic signal sub-images of which the vertical coordinate of the central point of the first traffic signal sub-image is smaller than the preset vertical coordinate difference value from the obtained candidate second traffic signal sub-images if the elevation difference value exceeds the preset elevation range, so as to obtain new candidate second traffic signal sub-images;
the second selection submodule is used for:
and taking the candidate second traffic signal sub-image with the minimum distance between the central point and the central point of the first traffic signal sub-image in the new candidate second traffic signal sub-image as a second traffic signal sub-image matched with the first traffic signal sub-image.
Optionally, the position information comprises center point coordinates;
the determining module comprises:
and the determining submodule is used for determining, for each traffic signal and based on a triangulation algorithm, the position of the traffic signal according to the coordinates of the central point of the area of the traffic signal on two consecutive frames of environment images and the pose information of the image acquisition device when the two consecutive frames of environment images are respectively acquired.
A third aspect of the embodiments of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, performs the steps of the method of the first aspect of the embodiments of the present disclosure.
A fourth aspect of the embodiments of the present disclosure provides an electronic device, including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of the embodiments of the present disclosure.
Through the technical scheme provided by the disclosure, the following technical effects can be at least achieved:
the method comprises the steps of inputting an environment image sequence into a pre-trained convolutional neural network model, automatically identifying traffic signal sub-images in each frame of environment image, identifying corresponding traffic signal sub-images of the same traffic signal on different environment images according to pose information of an image acquisition device, types of the traffic signal sub-images and position information of the traffic signal sub-images in the environment image, and finally determining the positions of the traffic signals according to the pose information of the image acquisition device and the position information of the traffic signal sub-images on the different environment images corresponding to the traffic signals for each traffic signal. Further, after the traffic signal is identified by the above traffic signal identification method, the identification result including the type and location of the traffic signal may be loaded to a road network file established in advance, and the result may be displayed in a two-dimensional or three-dimensional view format so as to compare the difference between the identification result and the result marked manually. In addition, based on the identified traffic signals, powerful support can be provided for high-precision map semantic generation so as to be applied to automatic driving and automatic driving simulation tests.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a traffic signal identification method according to an exemplary embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a traffic signal sub-image matching method according to an exemplary embodiment of the present disclosure;
fig. 3 is a flowchart illustrating another traffic signal sub-image matching method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a flow chart illustrating a method of traffic signal localization according to an exemplary embodiment of the present disclosure;
FIG. 5 is a block diagram illustrating a traffic signal identification device according to an exemplary embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating another traffic signal identification device according to an exemplary embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device shown in accordance with an exemplary embodiment of the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
The terms "first," "second," and the like in the embodiments of the present disclosure are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The disclosed embodiments provide a traffic signal identification method, which may be implemented by an electronic device. As shown in fig. 1, fig. 1 is a flowchart illustrating a traffic signal identification method according to an exemplary embodiment of the present disclosure, the method including:
s101, inputting the environmental image sequence acquired by the image acquisition device into a pre-trained convolutional neural network to obtain the type and position information of the traffic signal sub-image in each frame of environmental image.
The convolutional neural network can be obtained by training an existing convolutional neural network structure on training samples using a back-propagation algorithm. The convolutional neural network may include a plurality of convolutional layers and pooling layers with their associated weights, and a fully-connected layer at the top. The training samples may include a large number of environment images labeled with information such as the types and positions of traffic signal sub-images.
A traffic signal sub-image is an area of the environment image that indicates a traffic signal. When the pre-trained convolutional neural network is used for identification, the obtained traffic signal sub-image can be a rectangular frame indicating the traffic signal, with the traffic signal located inside the corresponding rectangular frame. Accordingly, each traffic signal sub-image has a corresponding position on the environment image, and its position information may include, but is not limited to, the pixel coordinates of the center point of the traffic signal sub-image (hereinafter, center point coordinates) and the pixel coordinates of each vertex (hereinafter, vertex coordinates).
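For illustration only, the per-frame detector output described above could be represented as follows. This is a minimal sketch: the detector interface, class name, and field names are assumptions and are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrafficSignalSubImage:
    """One detected traffic-signal region (rectangular frame) in a single environment image."""
    signal_type: str                  # e.g. "indication/straight" or "traffic_light/red" (assumed labels)
    bbox: Tuple[int, int, int, int]   # pixel coordinates (x_min, y_min, x_max, y_max)

    @property
    def center(self) -> Tuple[float, float]:
        # center point coordinates derived from the rectangular frame
        x_min, y_min, x_max, y_max = self.bbox
        return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def detect_frame(detector, image) -> List[TrafficSignalSubImage]:
    """Run a pre-trained detection CNN on one environment image.

    `detector` stands for any object-detection model that returns
    (label, bounding_box) pairs; its exact interface is hypothetical.
    """
    return [TrafficSignalSubImage(label, tuple(box)) for label, box in detector(image)]
```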
It is worth mentioning that, in the embodiments of the present disclosure, traffic signals may be broadly understood to include traffic signs, traffic lights, and the like. The type of a traffic signal sub-image refers to the type of the traffic signal indicated by the sub-image, and may include, but is not limited to: an indication sign for conveying road information and guiding the travel of vehicles and pedestrians, a prohibition sign for prohibiting or restricting certain traffic behaviors of vehicles and pedestrians, a warning sign for warning vehicles and pedestrians to pay attention to the road ahead, a traffic light, and the like. Further, the types may also include sub-types under each major type, for example, going straight, turning left, motor-vehicle lane, parking space, and the like under the indication sign; stop to yield, slow down to yield, no U-turn, and the like under the prohibition sign; watch for pedestrians, watch for children, construction ahead, and the like under the warning sign; and red light, green light, yellow light, and the like under the traffic light.
In addition, the environment image collected by the image collecting device can be a panoramic image.
And S102, matching the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-images in each frame of environment image, and taking the matched traffic signal sub-images as the areas of the same traffic signal on different environment images.
The pose information of the image acquisition device may include, but is not limited to, the coordinates of the image acquisition device in the Universal Transverse Mercator (UTM) projection, a heading angle, a roll angle, a pitch angle, elevation information, and the like.
For some road sections with complex road conditions (such as intersections), multiple traffic signals of the same type (such as multiple straight-ahead signs) may appear in the same acquired frame of environment image. In order to accurately identify the traffic signals for building a high-precision map, all the traffic signal sub-images in the environment image sequence need to be matched so as to identify the traffic signal sub-images representing the same traffic signal.
S103, for each traffic signal, determining the position of the traffic signal according to the position information of the area of the traffic signal on the different environment images and the pose information of the image acquisition device when acquiring the different environment images.
The position of the traffic signal refers to coordinates of the traffic signal in a world coordinate system.
Further, after step S101 is performed, the traffic signal identification method according to this embodiment may further include: for each frame of environment image, setting a keyword for each traffic signal sub-image according to the type of the traffic signal sub-image in the environment image, and storing each traffic signal sub-image and the corresponding keyword in a dictionary. The keyword may be set according to actual needs to distinguish different types of traffic signal sub-images; for example, for a traffic signal sub-image whose type is the prohibition sign, the keyword may be set to "prohibition" or "pne".
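A minimal sketch of the keyword/dictionary storage described above, reusing the TrafficSignalSubImage sketch from earlier. Only the "pne" keyword for prohibition signs comes from the text; the remaining keywords and helper names are assumptions.

```python
from collections import defaultdict

# Hypothetical keyword table; only "pne" for prohibition signs is taken from the text.
TYPE_TO_KEYWORD = {
    "prohibition": "pne",
    "indication": "ind",
    "warning": "warn",
    "traffic_light": "light",
}

def index_frame(sub_images):
    """Store one frame's traffic signal sub-images in a dictionary keyed by keyword."""
    frame_dict = defaultdict(list)
    for sub in sub_images:
        major_type = sub.signal_type.split("/")[0]   # e.g. "prohibition/no_u_turn" -> "prohibition"
        frame_dict[TYPE_TO_KEYWORD.get(major_type, "other")].append(sub)
    return dict(frame_dict)
```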
In order to make the technical solutions provided by the embodiments of the present disclosure easier for those skilled in the art, the following describes in detail each step of the traffic signal identification method described in the above embodiments by a specific implementation manner.
First, the above step S102 is described, i.e. how to match the obtained traffic signal sub-image according to the pose information of the image capturing device when capturing each frame of environment image, and the type and position information of the traffic signal sub-image in each frame of environment image.
Because the image acquisition device is mounted on the collection vehicle, when the collection vehicle drives straight, the course angle of the image acquisition device changes little, and therefore the lateral position of the traffic signal sub-images of the same traffic signal changes little across the environment images of consecutive frames; when the collection vehicle turns, the course angle of the image acquisition device changes greatly, and therefore the lateral position of the traffic signal sub-images of the same traffic signal changes greatly across the environment images of consecutive frames. Based on these two points, in one possible implementation, a lateral position constraint can be applied to the traffic signal sub-images in different environment images according to the change in the course angle of the image acquisition device and the lateral positions of the traffic signal sub-images in the different environment images, so as to obtain the traffic signal sub-images that may belong to the same traffic signal in different environment images (namely, candidate traffic signal sub-images), and the traffic signal sub-images of the same traffic signal in different environment images are further selected from the candidate traffic signal sub-images.
In a specific implementation, as shown in fig. 2, the step S102 may include:
s211, acquiring a course angle difference value of the image acquisition device relative to the first environment image during acquisition of the second environment image.
The first environment image and the second environment image are environment images of continuous frames.
S212, for each first traffic signal sub-image in the first environment image, selecting candidate second traffic signal sub-images from the second traffic signal sub-images with the same type according to the course angle difference value, the type and the center point coordinate of the first traffic signal sub-image and the center point coordinate of a second traffic signal sub-image with the same type as the first traffic signal sub-image in the second environment image.
The first traffic signal sub-image is a traffic signal sub-image located on the first environment image, and the second traffic signal sub-image is a traffic signal sub-image located on the second environment image.
For example, for each first traffic signal sub-image, if the course angle difference is within the preset course angle difference range, the collection vehicle may be considered to be driving straight, and in this case the lateral position of the traffic signal sub-images of the same traffic signal changes little across the consecutive frames of environment images. Therefore, the second traffic signal sub-images having the same type as the first traffic signal sub-image may be selected from the second environment image, and from these, the second traffic signal sub-images whose center point abscissa differs from that of the first traffic signal sub-image by less than the preset abscissa difference may be taken as candidate second traffic signal sub-images. The preset course angle difference range may be set as needed; for example, in consideration of error, its upper limit may be set to a positive number close to 0 (e.g., 0.5) and its lower limit to a negative number close to 0 (e.g., -0.5).
If the course angle difference is larger than the upper limit of the preset course angle difference range, the collection vehicle may be considered to be turning right; in this case, a traffic signal sub-image located on the left side of the previous frame of environment image corresponds, for the same signal, to a traffic signal sub-image shifted toward the right side in the next frame of environment image. Therefore, if the course angle difference is larger than the upper limit of the preset course angle difference range and the center point abscissa of the first traffic signal sub-image is within the first preset abscissa range, the second traffic signal sub-images whose center point abscissa is within the second preset abscissa range are selected from the second traffic signal sub-images of the same type as candidate second traffic signal sub-images.
If the course angle difference is smaller than the lower limit of the preset course angle difference range, the collection vehicle may be considered to be turning left; in this case, a traffic signal sub-image located on the right side of the previous frame of environment image corresponds, for the same signal, to a traffic signal sub-image shifted toward the left side in the next frame of environment image. Therefore, if the course angle difference is smaller than the lower limit of the preset course angle difference range and the center point abscissa of the first traffic signal sub-image is within the second preset abscissa range, the second traffic signal sub-images whose center point abscissa is within the first preset abscissa range are selected from the second traffic signal sub-images of the same type as candidate second traffic signal sub-images.
The first preset abscissa range and the second preset abscissa range may be set according to a specific embodiment. Alternatively, the first preset abscissa range may be set to be greater than 0, and the second preset abscissa range may be set to be less than 0.
Optionally, the first traffic signal sub-images of the same type in the first environment image may be sorted in an ascending order, and the second traffic signal sub-images of the same type in the second environment image may be sorted in an ascending order. And setting a first preset abscissa range and a second preset abscissa range according to respective sorting results of the first traffic signal sub-image and the second traffic signal sub-image of the same type. For example, the first preset abscissa range is set as the front preset number of bits, and the second preset abscissa range is set as the rear preset number of bits.
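The lateral-position constraint of step S212 could be sketched as follows. The numeric thresholds are placeholders, the code follows the optional sign convention above (first preset abscissa range greater than 0, second less than 0), which implicitly assumes abscissas measured relative to the image center, and the data structures are the assumed ones from the earlier sketches.

```python
def select_candidates(first_sub, same_type_second_subs, heading_diff,
                      diff_range=(-0.5, 0.5), max_abscissa_diff=50.0):
    """Select candidate second sub-images for one first sub-image of the same type.

    heading_diff is the course angle difference between the two consecutive frames;
    diff_range and max_abscissa_diff are placeholder thresholds.
    """
    lower, upper = diff_range
    x1, _ = first_sub.center
    candidates = []
    for sub in same_type_second_subs:
        x2, _ = sub.center
        if lower <= heading_diff <= upper:
            # driving roughly straight: expect a small lateral displacement
            if abs(x2 - x1) < max_abscissa_diff:
                candidates.append(sub)
        elif heading_diff > upper:
            # turning right: first range (> 0) in frame 1, second range (< 0) in frame 2
            if x1 > 0 and x2 < 0:
                candidates.append(sub)
        else:
            # turning left: the symmetric case
            if x1 < 0 and x2 > 0:
                candidates.append(sub)
    return candidates
```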
And S213, taking the candidate second traffic signal sub-image with the minimum distance between the central point and the central point of the first traffic signal sub-image in the obtained candidate second traffic signal sub-images as a second traffic signal sub-image matched with the first traffic signal sub-image.
After the candidate second traffic signal sub-images are selected, the distance between the center point of the first traffic signal sub-image and the center point of each candidate second traffic signal sub-image can be calculated according to the center point coordinates of the first traffic signal sub-image and the center point coordinates of each candidate second traffic signal sub-image, and the candidate traffic signal sub-image with the minimum distance is selected from the candidate second traffic signal sub-images to serve as the second traffic signal sub-image matched with the first traffic signal sub-image.
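A minimal sketch of the minimum-center-distance selection in step S213, under the same assumed data structures:

```python
import math

def match_by_center_distance(first_sub, candidates):
    """Return the candidate whose center point is closest to that of first_sub, or None."""
    if not candidates:
        return None
    x1, y1 = first_sub.center
    return min(candidates,
               key=lambda sub: math.hypot(sub.center[0] - x1, sub.center[1] - y1))
```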
In another alternative implementation, it is considered that the height of the image acquisition device changes with road undulation, which causes the longitudinal position of the traffic signal sub-images of the same traffic signal to change across the environment images of consecutive frames. Based on this point, after the lateral position constraint has been applied to the traffic signal sub-images in different environment images to obtain the candidate traffic signal sub-images, the elevation information of the image acquisition device can be used to apply a longitudinal position constraint to the candidate traffic signal sub-images, screening out the candidates whose longitudinal positions do not meet the requirement; the candidates remaining after screening are used as new candidate traffic signal sub-images. The traffic signal sub-images of the same traffic signal in different environment images are then selected from the new candidate traffic signal sub-images.
In a specific implementation, as shown in fig. 3, the step S102 further includes:
s214, acquiring an elevation difference value of the second environment image relative to the first environment image.
If the elevation difference is within the preset elevation range, the road surface undulation may be considered to be small, and at this time, the change of the ordinate of the central point of the same traffic signal in the continuous frame environmental image is small, and accordingly, step S215 is performed. If the elevation difference exceeds the preset elevation range, the road surface undulation is considered to be large, at this time, the change of the vertical coordinate of the central point of the same traffic signal in the continuous frame environment image is large, and correspondingly, step S216 is executed.
S215, screening out, from the obtained candidate second traffic signal sub-images, the candidates whose center point ordinate differs from that of the first traffic signal sub-image by more than the preset ordinate difference, so as to obtain new candidate second traffic signal sub-images.
S216, screening out, from the obtained candidate second traffic signal sub-images, the candidates whose center point ordinate differs from that of the first traffic signal sub-image by less than the preset ordinate difference, so as to obtain new candidate second traffic signal sub-images.
The preset ordinate difference may be set as needed.
Accordingly, when the above-mentioned step S213 is performed, the candidate second traffic signal sub-image, among the new candidate second traffic signal sub-images, whose center point is closest to the center point of the first traffic signal sub-image is used as the second traffic signal sub-image matched with the first traffic signal sub-image.
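The longitudinal constraint of steps S214 to S216 could be sketched as follows; the preset elevation range and the ordinate threshold are placeholder values, not values taken from the disclosure.

```python
def filter_by_elevation(first_sub, candidates, elevation_diff,
                        elevation_range=(-0.2, 0.2), max_ordinate_diff=30.0):
    """Refine the lateral candidates using the elevation difference between frames."""
    _, y1 = first_sub.center
    low, high = elevation_range
    if low <= elevation_diff <= high:
        # small road undulation: keep candidates whose ordinate change is small
        return [s for s in candidates if abs(s.center[1] - y1) <= max_ordinate_diff]
    # large road undulation: keep candidates whose ordinate change is large
    return [s for s in candidates if abs(s.center[1] - y1) >= max_ordinate_diff]
```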
Next, the above step S103 is described in detail, i.e. how to determine the position of each traffic signal according to the position information of the area of the traffic signal on the different environment images and the pose information of the image capturing device when capturing the different environment images.
In an optional implementation, environment images of consecutive frames, for example two consecutive frames of environment images, may be selected, and for each traffic signal, the position of the traffic signal is determined based on a triangulation algorithm according to the coordinates of the center point of the area of the traffic signal on the two consecutive frames of environment images and the pose information of the image acquisition device when the two consecutive frames of environment images are respectively acquired.
In a specific implementation, as shown in FIG. 4, for any traffic signal, point O_l is the position of the center of the image acquisition device when acquiring the first environment image L, point O_r is the position of the center of the image acquisition device when acquiring the second environment image R, point P_l is the position of the center point of the first traffic signal sub-image of the traffic signal in the first image L, and point P_r is the position of the center point of the second traffic signal sub-image of the traffic signal in the second image R. According to the respective coordinates and attitude angles (including heading, roll, and pitch) of points O_l and O_r, and the respective coordinates of points P_l and P_r, the coordinates, in the world coordinate system, of the intersection point P of the line connecting O_l and P_l and the line connecting O_r and P_r are calculated by a numerical approximation algorithm and taken as the position of the traffic signal.
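A minimal numerical sketch of this triangulation step: each center point is back-projected to a viewing ray starting at the corresponding camera position, and the point closest to both rays (in the least-squares sense) is taken as the intersection P. The pinhole camera model with assumed intrinsics and the closed-form least-squares solution are assumptions; the patent only requires a numerical approximation of the intersection.

```python
import numpy as np

def pixel_to_ray(pixel, cam_to_world_rotation, fx, fy, cx, cy):
    """Back-project a center point (pixel) to a unit viewing ray in world coordinates.

    cam_to_world_rotation is the 3x3 rotation built from heading, roll and pitch;
    fx, fy, cx, cy are assumed pinhole intrinsics of the image acquisition device.
    """
    u, v = pixel
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    d_world = cam_to_world_rotation @ d_cam
    return d_world / np.linalg.norm(d_world)

def triangulate(o_l, d_l, o_r, d_r):
    """Least-squares point closest to the rays O_l + t*d_l and O_r + s*d_r."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in ((np.asarray(o_l, float), np.asarray(d_l, float)),
                 (np.asarray(o_r, float), np.asarray(d_r, float))):
        m = np.eye(3) - np.outer(d, d)   # projector onto the plane orthogonal to the ray
        A += m
        b += m @ o
    return np.linalg.solve(A, b)         # world coordinates taken as the traffic signal position
```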
It is noted that for simplicity of description, the above method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present disclosure is not limited by the order of acts or combination of acts described. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required in order to implement the disclosure.
With the above traffic signal identification method, the environment image sequence is input into the pre-trained convolutional neural network model so that the traffic signal sub-images in each frame of environment image are identified automatically; the sub-images corresponding to the same traffic signal on different environment images are identified according to the pose information of the image acquisition device, the types of the traffic signal sub-images, and the position information of the traffic signal sub-images in the environment images; and finally, for each traffic signal, the position of the traffic signal is determined according to the pose information of the image acquisition device and the position information of the corresponding traffic signal sub-images on the different environment images. The whole identification process requires no manual involvement; compared with related-art approaches that rely on manual participation, identification efficiency and accuracy are improved, labor cost is saved, and the efficiency of constructing the whole high-precision map is further improved. Further, after the traffic signals are identified by the above traffic signal identification method, the identification results, including the type and location of each traffic signal, may be loaded into a road network file established in advance and displayed in a two-dimensional or three-dimensional view, so that the difference between the identification results and manually marked results can be compared. In addition, the identified traffic signals can provide strong support for high-precision map semantic generation, for use in automatic driving and automatic driving simulation tests.
An embodiment of the present disclosure further provides a traffic signal identification apparatus, which may be applied to an electronic device, as shown in fig. 5, where fig. 5 is a block diagram of a traffic signal identification apparatus according to an exemplary embodiment of the present disclosure, and the apparatus 500 includes:
the input module 501 is configured to input an environmental image sequence acquired by an image acquisition device to a pre-trained convolutional neural network, so as to obtain the type and position information of a traffic signal sub-image in each frame of environmental image;
the matching module 502 is configured to match the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-image in each frame of environment image, and use the matched traffic signal sub-images as areas of the same traffic signal on different environment images;
a determining module 503, configured to determine, for each traffic signal, a position of the traffic signal according to position information of an area of the traffic signal on a different environment image and pose information of the image acquisition apparatus when acquiring the different environment image.
Optionally, the pose information includes a heading angle, and the position information includes a center point coordinate; as shown in fig. 6, the matching module 502 includes:
the first obtaining submodule 521 is configured to obtain a heading angle difference value of the image acquisition device when acquiring a second environment image relative to when acquiring a first environment image, where the first environment image and the second environment image are environment images of consecutive frames;
a first selecting sub-module 522, configured to select, for each first traffic signal sub-image in the first environment image, a candidate second traffic signal sub-image from the second traffic signal sub-images of the same type according to the heading angle difference, the type and the center point coordinate of the first traffic signal sub-image, and the center point coordinate of a second traffic signal sub-image in the second environment image that is of the same type as the first traffic signal sub-image;
the second selecting sub-module 523 is configured to use the candidate second traffic signal sub-image with the smallest distance between the center point and the center point of the first traffic signal sub-image in the obtained candidate second traffic signal sub-images as the second traffic signal sub-image matched with the first traffic signal sub-image.
Optionally, the first selecting submodule is configured to:
for each first traffic signal sub-image, if the course angle difference value is within a preset course angle difference value range, selecting a second traffic signal sub-image with a central point horizontal coordinate difference value smaller than a preset horizontal coordinate difference value from second traffic signal sub-images with the same type as the candidate second traffic signal sub-image;
if the course angle difference value is larger than the upper limit value of the preset course angle difference value range, and the central point abscissa of the first traffic signal sub-image is located in a first preset abscissa range, selecting a second traffic signal sub-image of which the central point abscissa is located in a second preset abscissa range from second traffic signal sub-images of the same type as the candidate second traffic signal sub-image;
and if the course angle difference value is smaller than the lower limit value of the preset course angle difference value range and the central point abscissa of the first traffic signal sub-image is located in the second preset abscissa range, selecting a second traffic signal sub-image with the central point abscissa located in the first preset abscissa range from the second traffic signal sub-images with the same type as the candidate second traffic signal sub-image.
Optionally, the pose information further includes elevation information; as shown in fig. 6, the matching module 502 further includes:
a second obtaining sub-module 524, configured to obtain an elevation difference value of the second environmental image with respect to the first environmental image;
the first screening sub-module 525 is configured to, if the elevation difference value is within a preset elevation range, screen out, from the obtained candidate second traffic signal sub-images, the candidates whose center point ordinate differs from that of the first traffic signal sub-image by more than a preset ordinate difference value, so as to obtain new candidate second traffic signal sub-images;
the second screening sub-module 526 is configured to, if the elevation difference value exceeds the preset elevation range, screen out, from the obtained candidate second traffic signal sub-images, the candidates whose center point ordinate differs from that of the first traffic signal sub-image by less than the preset ordinate difference value, so as to obtain new candidate second traffic signal sub-images;
the second selection submodule 523 is configured to:
and taking the candidate second traffic signal sub-image with the minimum distance between the central point and the central point of the first traffic signal sub-image in the new candidate second traffic signal sub-image as a second traffic signal sub-image matched with the first traffic signal sub-image.
Optionally, the position information comprises center point coordinates; as shown in fig. 6, the determining module 503 includes:
the determining submodule 531 is configured to determine, for each traffic signal and based on a triangulation algorithm, the position of the traffic signal according to the coordinates of the center point of the area of the traffic signal on two consecutive frames of environment images and the pose information of the image acquisition device when the two consecutive frames of environment images are respectively acquired.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the functional module, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
By adopting the above traffic signal identification apparatus, the environment image sequence is input into the pre-trained convolutional neural network model so that the traffic signal sub-images in each frame of environment image are identified automatically; the sub-images corresponding to the same traffic signal on different environment images are identified according to the pose information of the image acquisition device, the types of the traffic signal sub-images, and the position information of the traffic signal sub-images in the environment images; and finally, for each traffic signal, the position of the traffic signal is determined according to the pose information of the image acquisition device and the position information of the corresponding traffic signal sub-images on the different environment images. The whole identification process requires no manual involvement; compared with related-art approaches that rely on manual participation, identification efficiency and accuracy are improved, labor cost is saved, and the efficiency of constructing the whole high-precision map is further improved. Further, after the traffic signals are identified, the identification results, including the type and location of each traffic signal, may be loaded into a road network file established in advance and displayed in a two-dimensional or three-dimensional view, so that the difference between the identification results and manually marked results can be compared. In addition, the identified traffic signals can provide strong support for high-precision map semantic generation, for use in automatic driving and automatic driving simulation tests.
The disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the traffic signal identification method provided by the above-mentioned method embodiments.
The disclosed embodiments also provide an electronic device, which may be provided as a server, including: a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the traffic signal identification method provided by the above method embodiments.
Fig. 7 is a schematic structural diagram of the electronic device, and as shown in fig. 7, the electronic device 700 may include: a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700, so as to complete all or part of the steps in the traffic signal identification method. The memory 702 is used to store various types of data to support operation of the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia component 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described traffic signal identification method.
In another exemplary embodiment, there is also provided a computer-readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the traffic signal identification method described above. For example, the computer-readable storage medium may be the memory 702 described above, which includes program instructions executable by the processor 701 of the electronic device 700 to perform the traffic signal identification method described above.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within its technical idea, and these simple modifications all fall within the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the present disclosure. In order to avoid unnecessary repetition, the various possible combinations are not separately described in this disclosure.
In addition, the various embodiments of the present disclosure may be combined in any manner, and such combinations should likewise be regarded as content disclosed herein as long as they do not depart from the spirit of the present disclosure.

Claims (12)

1. A traffic signal identification method, comprising:
inputting an environment image sequence acquired by an image acquisition device into a pre-trained convolutional neural network to obtain the type and position information of a traffic signal sub-image in each frame of environment image;
matching the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-images in each frame of environment image, and taking the matched traffic signal sub-images as the areas of the same traffic signal on different environment images;
and for each traffic signal, determining the position of the traffic signal according to the position information of the areas of the traffic signal on the different environment images and the pose information of the image acquisition device when acquiring the different environment images.
2. The method of claim 1, wherein the pose information comprises a heading angle and the position information comprises center point coordinates;
the matching of the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-images in each frame of environment image comprises the following steps:
acquiring a heading angle difference value of the image acquisition device when acquiring a second environment image relative to when acquiring a first environment image, wherein the first environment image and the second environment image are environment images of consecutive frames;
for each first traffic signal sub-image in the first environment image, selecting candidate second traffic signal sub-images from second traffic signal sub-images of the same type according to the heading angle difference value, the type and the center point coordinates of the first traffic signal sub-image, and the center point coordinates of the second traffic signal sub-images of the same type as the first traffic signal sub-image in the second environment image;
and taking, from the obtained candidate second traffic signal sub-images, the candidate second traffic signal sub-image whose center point is closest to the center point of the first traffic signal sub-image as the second traffic signal sub-image matched with the first traffic signal sub-image.
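As a non-limiting illustration of the last step of this claim, the following sketch (reusing the hypothetical `Detection` structure introduced earlier) takes, among candidate second sub-images of the same type, the one whose center point is closest to the center point of the first sub-image:

```python
import math
from typing import List, Optional

def nearest_candidate(first: Detection,
                      candidates: List[Detection]) -> Optional[Detection]:
    """Among candidate second sub-images (Detection as defined in the earlier
    sketch), return the one whose center point is closest to the center point
    of the first sub-image, or None if no candidate of the same type exists."""
    same_type = [d for d in candidates if d.signal_type == first.signal_type]
    if not same_type:
        return None
    return min(same_type, key=lambda d: math.dist(d.center, first.center))
```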
3. The method according to claim 2, wherein the selecting, for each first traffic signal sub-image in the first environment image, candidate second traffic signal sub-images from the second traffic signal sub-images of the same type according to the heading angle difference value, the type and the center point coordinates of the first traffic signal sub-image, and the center point coordinates of the second traffic signal sub-images of the same type as the first traffic signal sub-image in the second environment image comprises:
for each first traffic signal sub-image, if the heading angle difference value is within a preset heading angle difference value range, selecting, from the second traffic signal sub-images of the same type, second traffic signal sub-images whose center point abscissa difference value from the first traffic signal sub-image is smaller than a preset abscissa difference value as the candidate second traffic signal sub-images;
if the heading angle difference value is larger than the upper limit of the preset heading angle difference value range and the center point abscissa of the first traffic signal sub-image is within a first preset abscissa range, selecting, from the second traffic signal sub-images of the same type, second traffic signal sub-images whose center point abscissa is within a second preset abscissa range as the candidate second traffic signal sub-images;
and if the heading angle difference value is smaller than the lower limit of the preset heading angle difference value range and the center point abscissa of the first traffic signal sub-image is within the second preset abscissa range, selecting, from the second traffic signal sub-images of the same type, second traffic signal sub-images whose center point abscissa is within the first preset abscissa range as the candidate second traffic signal sub-images.
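The three branches of this claim may be illustrated by the following non-limiting sketch. The preset heading angle difference value range, the preset abscissa difference value, and the first and second preset abscissa ranges are passed in as parameters, since their concrete values are not specified here; the `Detection` structure from the earlier sketch is reused.

```python
from typing import List, Tuple

def select_candidates(heading_diff: float,
                      first: Detection,              # Detection from the earlier sketch
                      second_same_type: List[Detection],
                      diff_range: Tuple[float, float],
                      max_x_diff: float,
                      first_x_range: Tuple[float, float],
                      second_x_range: Tuple[float, float]) -> List[Detection]:
    """Select candidate second sub-images for one first sub-image from the
    same-type second sub-images, based on the heading angle difference value
    and the center point abscissas."""
    low, high = diff_range
    fx = first.center[0]

    if low <= heading_diff <= high:
        # Heading essentially unchanged: keep candidates whose center point
        # abscissa stays close to that of the first sub-image.
        return [d for d in second_same_type if abs(d.center[0] - fx) < max_x_diff]
    if heading_diff > high and first_x_range[0] <= fx <= first_x_range[1]:
        # Heading swung past the upper limit: candidates are expected to lie
        # in the second preset abscissa range.
        return [d for d in second_same_type
                if second_x_range[0] <= d.center[0] <= second_x_range[1]]
    if heading_diff < low and second_x_range[0] <= fx <= second_x_range[1]:
        # Heading swung past the lower limit: candidates are expected to lie
        # in the first preset abscissa range.
        return [d for d in second_same_type
                if first_x_range[0] <= d.center[0] <= first_x_range[1]]
    return []
```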
4. The method according to claim 2 or 3, characterized in that the pose information further comprises elevation information;
the matching of the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-images in each frame of environment image further comprises:
acquiring an elevation difference value of the second environment image relative to the first environment image;
if the elevation difference value is within a preset elevation range, filtering out, from the obtained candidate second traffic signal sub-images, the candidate second traffic signal sub-images whose center point ordinate difference value from the first traffic signal sub-image is larger than a preset ordinate difference value, so as to obtain new candidate second traffic signal sub-images;
if the elevation difference value exceeds the preset elevation range, filtering out, from the obtained candidate second traffic signal sub-images, the candidate second traffic signal sub-images whose center point ordinate difference value from the first traffic signal sub-image is smaller than the preset ordinate difference value, so as to obtain new candidate second traffic signal sub-images;
the taking, from the obtained candidate second traffic signal sub-images, the candidate second traffic signal sub-image whose center point is closest to the center point of the first traffic signal sub-image as the second traffic signal sub-image matched with the first traffic signal sub-image comprises:
taking, from the new candidate second traffic signal sub-images, the candidate second traffic signal sub-image whose center point is closest to the center point of the first traffic signal sub-image as the second traffic signal sub-image matched with the first traffic signal sub-image.
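One possible, non-limiting reading of the elevation-based screening is sketched below; it treats an elevation difference value that "exceeds the preset elevation range" as one falling outside that range, and takes the preset elevation range and the preset ordinate difference value as parameters, since their concrete values are not specified here.

```python
from typing import List, Tuple

def filter_by_elevation(elevation_diff: float,
                        first: Detection,            # Detection from the earlier sketch
                        candidates: List[Detection],
                        elevation_range: Tuple[float, float],
                        max_y_diff: float) -> List[Detection]:
    """Refine the candidate second sub-images using the elevation difference
    between the two frames and the center point ordinate difference."""
    fy = first.center[1]
    low, high = elevation_range
    if low <= elevation_diff <= high:
        # Elevation roughly constant: drop candidates whose center point
        # ordinate moved by more than the preset ordinate difference value.
        return [d for d in candidates if abs(d.center[1] - fy) <= max_y_diff]
    # Elevation changed markedly: drop candidates whose center point ordinate
    # moved by less than the preset ordinate difference value.
    return [d for d in candidates if abs(d.center[1] - fy) >= max_y_diff]
```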
5. The method of claim 1, wherein the location information comprises center point coordinates;
the determining the position of each traffic signal according to the position information of the area of the traffic signal on the different environment images and the pose information of the image acquisition device when acquiring the different environment images comprises the following steps:
for each traffic signal, determining, based on a triangulation algorithm, the position of the traffic signal according to the center point coordinates of the areas of the traffic signal on two consecutive frames of environment images and the pose information of the image acquisition device when respectively acquiring the two consecutive frames of environment images.
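The claim does not specify a particular triangulation algorithm. As one generic, non-limiting possibility, a two-view linear triangulation (direct linear transform) is sketched below; it assumes that a 3x4 projection matrix can be formed for each of the two consecutive frames from the recorded pose of the image acquisition device together with the camera intrinsics, neither of which is detailed here.

```python
import numpy as np
from typing import Tuple

def triangulate_center_point(P1: np.ndarray, P2: np.ndarray,
                             uv1: Tuple[float, float],
                             uv2: Tuple[float, float]) -> np.ndarray:
    """Estimate the 3D position of a traffic signal from the center points of
    its matched sub-images in two consecutive frames.

    P1, P2: 3x4 camera projection matrices for the two frames, assumed to be
            built from the recorded poses and the camera intrinsics.
    uv1, uv2: center point pixel coordinates (u, v) in the two frames.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Standard direct linear transform: each view contributes two equations.
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated with
    # the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)
```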
6. A traffic signal identifying apparatus, comprising:
the input module is used for inputting the environmental image sequence acquired by the image acquisition device into a pre-trained convolutional neural network to obtain the type and position information of the traffic signal sub-image in each frame of environmental image;
the matching module is used for matching the obtained traffic signal sub-images according to the pose information of the image acquisition device when acquiring each frame of environment image and the type and position information of the traffic signal sub-images in each frame of environment image, and taking the matched traffic signal sub-images as the areas of the same traffic signal on different environment images;
and the determining module is used for determining the position of each traffic signal according to the position information of the area of the traffic signal on the different environment images and the pose information of the image acquisition device when acquiring the different environment images.
7. The apparatus of claim 6, wherein the pose information comprises a heading angle and the position information comprises center point coordinates;
the matching module includes:
the first acquisition submodule is used for acquiring a heading angle difference value of the image acquisition device when acquiring a second environment image relative to when acquiring a first environment image, wherein the first environment image and the second environment image are environment images of consecutive frames;
the first selection submodule is used for selecting candidate second traffic signal sub-images from the second traffic signal sub-images of the same type according to the heading angle difference value, the type and the center point coordinates of the first traffic signal sub-image, and the center point coordinates of a second traffic signal sub-image of the same type as the first traffic signal sub-image in the second environment image;
and the second selection submodule is used for taking, from the obtained candidate second traffic signal sub-images, the candidate second traffic signal sub-image whose center point is closest to the center point of the first traffic signal sub-image as the second traffic signal sub-image matched with the first traffic signal sub-image.
8. The apparatus of claim 7, wherein the first selection submodule is configured to:
for each first traffic signal sub-image, if the heading angle difference value is within a preset heading angle difference value range, select, from the second traffic signal sub-images of the same type, second traffic signal sub-images whose center point abscissa difference value from the first traffic signal sub-image is smaller than a preset abscissa difference value as the candidate second traffic signal sub-images;
if the heading angle difference value is larger than the upper limit of the preset heading angle difference value range and the center point abscissa of the first traffic signal sub-image is within a first preset abscissa range, select, from the second traffic signal sub-images of the same type, second traffic signal sub-images whose center point abscissa is within a second preset abscissa range as the candidate second traffic signal sub-images;
and if the heading angle difference value is smaller than the lower limit of the preset heading angle difference value range and the center point abscissa of the first traffic signal sub-image is within the second preset abscissa range, select, from the second traffic signal sub-images of the same type, second traffic signal sub-images whose center point abscissa is within the first preset abscissa range as the candidate second traffic signal sub-images.
9. The apparatus according to claim 7 or 8, wherein the pose information further includes elevation information;
the matching module further comprises:
the second acquisition submodule is used for acquiring an elevation difference value of the second environment image relative to the first environment image;
the first screening submodule is used for, if the elevation difference value is within a preset elevation range, filtering out, from the obtained candidate second traffic signal sub-images, the candidate second traffic signal sub-images whose center point ordinate difference value from the first traffic signal sub-image is larger than a preset ordinate difference value, so as to obtain new candidate second traffic signal sub-images;
the second screening submodule is used for, if the elevation difference value exceeds the preset elevation range, filtering out, from the obtained candidate second traffic signal sub-images, the candidate second traffic signal sub-images whose center point ordinate difference value from the first traffic signal sub-image is smaller than the preset ordinate difference value, so as to obtain new candidate second traffic signal sub-images;
the second selection submodule is used for:
taking, from the new candidate second traffic signal sub-images, the candidate second traffic signal sub-image whose center point is closest to the center point of the first traffic signal sub-image as the second traffic signal sub-image matched with the first traffic signal sub-image.
10. The apparatus of claim 6, wherein the location information comprises center point coordinates;
the determining module comprises:
the determining submodule is used for determining, for each traffic signal and based on a triangulation algorithm, the position of the traffic signal according to the center point coordinates of the areas of the traffic signal on two consecutive frames of environment images and the pose information of the image acquisition device when respectively acquiring the two consecutive frames of environment images.
11. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
12. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 5.
CN201910356683.7A 2019-04-29 2019-04-29 Traffic signal identification method and device, storage medium and electronic equipment Active CN110795977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910356683.7A CN110795977B (en) 2019-04-29 2019-04-29 Traffic signal identification method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110795977A true CN110795977A (en) 2020-02-14
CN110795977B CN110795977B (en) 2020-09-04

Family

ID=69426848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910356683.7A Active CN110795977B (en) 2019-04-29 2019-04-29 Traffic signal identification method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110795977B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176287A (en) * 2011-02-28 2011-09-07 无锡中星微电子有限公司 Traffic signal lamp identifying system and method
CN108229250A (en) * 2016-12-14 2018-06-29 杭州海康威视数字技术股份有限公司 Traffic lights method for relocating and device
EP3364342A1 (en) * 2017-02-17 2018-08-22 Cogisen SRL Method for image processing and video compression
US20190080186A1 (en) * 2017-09-12 2019-03-14 Baidu Online Network Technology (Beijing) Co., Ltd. Traffic light state recognizing method and apparatus, computer device and readable medium
CN109508580A (en) * 2017-09-15 2019-03-22 百度在线网络技术(北京)有限公司 Traffic lights recognition methods and device
CN108108761A (en) * 2017-12-21 2018-06-01 西北工业大学 A kind of rapid transit signal lamp detection method based on depth characteristic study
CN108831168A (en) * 2018-06-01 2018-11-16 江苏数翰信息科技有限公司 A kind of method for controlling traffic signal lights and system based on association crossing visual identity
CN108875608A (en) * 2018-06-05 2018-11-23 合肥湛达智能科技有限公司 A kind of automobile traffic signal recognition method based on deep learning
CN109460715A (en) * 2018-10-18 2019-03-12 大唐网络有限公司 A kind of traffic lights automatic identification implementation method based on machine learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RUTURAJ KULKARNI: "Traffic Light Detection and Recognition for Self", 《IEEE》 *
ZHENCHAO OUYANG: "Deep CNN-Based Real-Time Traffic Light", 《IEEE TRANSACTIONS ON MOBILE COMPUTING》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113074749A (en) * 2021-06-07 2021-07-06 湖北亿咖通科技有限公司 Road condition detection and update method, electronic equipment and computer-readable storage medium
CN113074749B (en) * 2021-06-07 2021-08-20 湖北亿咖通科技有限公司 Road condition detection and update method, electronic equipment and computer-readable storage medium


Similar Documents

Publication Publication Date Title
CN110146097B (en) Method and system for generating automatic driving navigation map, vehicle-mounted terminal and server
US11738770B2 (en) Determination of lane connectivity at traffic intersections for high definition maps
CN110795514B (en) Road element identification and road network construction method, device, storage medium and electronic equipment
KR102266830B1 (en) Lane determination method, device and storage medium
DE602004002048T2 (en) Device, system and method for signaling the traffic situation
CN112069856A (en) Map generation method, driving control method, device, electronic equipment and system
CN111582189B (en) Traffic signal lamp identification method and device, vehicle-mounted control terminal and motor vehicle
CN110796714B (en) Map construction method, device, terminal and computer readable storage medium
CN111931683B (en) Image recognition method, device and computer readable storage medium
CN111874006A (en) Route planning processing method and device
CN112543956A (en) Method and device for providing road congestion reason
US20240077331A1 (en) Method of predicting road attributers, data processing system and computer executable code
CN114758086A (en) Method and device for constructing urban road information model
JP2014215205A (en) Information processing device, server device, information processing method, information processing system and information processing program
CN110795977B (en) Traffic signal identification method and device, storage medium and electronic equipment
CN112765302B (en) Method and device for processing position information and computer readable medium
CN113297878B (en) Road intersection identification method, device, computer equipment and storage medium
CN111325811B (en) Lane line data processing method and processing device
CN111661054B (en) Vehicle control method, device, electronic device and storage medium
JP2018010320A (en) Server device, terminal device, information processing method, information processing system, and information processing program
CN116311015A (en) Road scene recognition method, device, server, storage medium and program product
CN108399357A (en) A kind of Face detection method and device
JP6224343B2 (en) Server apparatus, information processing method, information processing system, and information processing program
CN112543949A (en) Discovering and evaluating meeting locations using image content analysis
KR20190085247A (en) System and data processing method for providing high definition map service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder
Address after: Room 307, 3 / F, supporting public building, Mantingfangyuan community, qingyanli, Haidian District, Beijing 100086
Patentee after: Beijing Wuyi Vision digital twin Technology Co.,Ltd.
Address before: Room 307, 3 / F, supporting public building, Mantingfangyuan community, qingyanli, Haidian District, Beijing 100086
Patentee before: DANGJIA MOBILE GREEN INTERNET TECHNOLOGY GROUP Co.,Ltd.
TR01 Transfer of patent right
Effective date of registration: 20220916
Address after: Room 315, 3rd Floor, Supporting Public Building, Mantingfangyuan Community, Qingyunli, Haidian District, Beijing 100000
Patentee after: Everything mirror (Beijing) computer system Co.,Ltd.
Address before: Room 307, 3 / F, supporting public building, Mantingfangyuan community, qingyanli, Haidian District, Beijing 100086
Patentee before: Beijing Wuyi Vision digital twin Technology Co.,Ltd.