CN112861832B - Traffic identification detection method and device, electronic equipment and storage medium


Info

Publication number
CN112861832B
Authority
CN
China
Prior art keywords
traffic
image plane
object group
static object
point cloud
Prior art date
Legal status
Active
Application number
CN202110445372.5A
Other languages
Chinese (zh)
Other versions
CN112861832A (en)
Inventor
李博文 (Li Bowen)
王春 (Wang Chun)
Current Assignee
Ecarx Hubei Tech Co Ltd
Original Assignee
Hubei Ecarx Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hubei Ecarx Technology Co Ltd
Priority to CN202110445372.5A
Publication of CN112861832A
Application granted
Publication of CN112861832B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Abstract

The embodiment of the invention provides a method and a device for detecting traffic signs, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring a current image shot by a camera on a vehicle and current pose information of the vehicle; carrying out object detection on the current image to obtain corresponding areas of all first traffic identifications in the first static object group on an image plane; determining an image plane point cloud of an image corresponding to the first static object group based on the corresponding region; obtaining world coordinates of each second traffic identification in the second static object group based on the current pose information of the vehicle; determining an image plane point cloud of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identification; performing element matching based on the image plane point cloud of the image and the image plane point clouds of the maps; if the matching is successful, obtaining the attribute information of each target traffic identification from the map and obtaining the color information of each target traffic identification from the current image. By adopting the method, more accurate traffic identification information is obtained.

Description

Traffic identification detection method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method and a device for detecting a traffic sign of an automatic driving vehicle in the driving process, electronic equipment and a storage medium.
Background
In the process of driving an automatic driving vehicle on a road, traffic signs on the road, such as traffic lights or signboards, often need to be detected to obtain their attribute information, and the automatic driving vehicle then performs the next driving operation based on the attribute information.
At present, an automatic driving vehicle detects traffic signs on the road mainly by detecting images shot by a camera on the vehicle to obtain attribute information of the traffic signs, and then performs the next driving operation based on the attribute information. For example: based on image processing, a traffic light is identified from the image shot by the camera, its color and attribute information (including the lane to which the traffic light belongs, the steering direction indicated by the traffic light, and the like) are obtained, and it is determined from the color and attribute information whether the vehicle needs to steer, pass through the intersection, or the like in the next driving operation. For another example: based on image processing, a speed limit signboard is identified from the image shot by the camera, and it is determined from the speed limit value on the signboard whether the vehicle needs to accelerate or decelerate in the next driving operation.
However, since the image captured by the camera may not be clear enough during the driving of the autonomous vehicle, some information may be identified incorrectly during the image identification process, which results in that the attribute information of the traffic sign obtained by means of image detection is not accurate enough.
Disclosure of Invention
The embodiment of the invention aims to provide a method and a device for detecting a traffic sign, electronic equipment and a storage medium, so that more accurate attribute information of the traffic sign can be obtained when a vehicle is driven automatically.
In order to achieve the above object, an embodiment of the present invention provides a method for detecting a traffic identifier, including:
acquiring a current image shot by a camera on a vehicle, and acquiring current pose information of the vehicle;
carrying out object detection on the current image to obtain corresponding areas of all first traffic identifications in the first static object group on an image plane;
acquiring world coordinates of each second traffic identification in the second static object group from an area of interest of a map used by the vehicle based on current pose information of the vehicle;
determining an image plane point cloud of an image corresponding to the first static object group based on the corresponding area of each first traffic identification in the image plane;
determining image plane point clouds of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identifications;
performing element matching on each first traffic identification in the first static object group and each second traffic identification in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map;
and if each first traffic identification in the first static object group is successfully matched with each second traffic identification in the second static object group, taking each first traffic identification as a target traffic identification, obtaining attribute information of each target traffic identification from the attribute information of each traffic identification pre-stored in a map, and obtaining color information of each target traffic identification according to the current image.
Further, after the element matching is performed on each first traffic identifier in the first static object group and each second traffic identifier in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map, the method further includes:
and if the matching of each first traffic identification in the first static object group and each second traffic identification in the second static object group is unsuccessful, acquiring the attribute information and the color information of each first traffic identification according to the current image.
Further, the determining an image plane point cloud of the image corresponding to the first static object group based on the corresponding area of each first traffic sign on the image plane includes:
aiming at each first traffic identification in a first static object group in the current image, calculating the average coordinate of each pixel point corresponding to the outline of the first traffic identification according to the corresponding area of the first traffic identification in the image plane, and taking the pixel point at the average coordinate as the point corresponding to the first traffic identification;
and taking points corresponding to the first traffic identifications together as image plane point clouds of the images.
Further, the determining, based on the world coordinates of the part or all of the second traffic sign, an image plane point cloud of at least one map corresponding to the second static object group includes:
determining the corresponding coordinate point of each second traffic identification in the second static object group after the second traffic identification is projected to the two-dimensional image plane based on the world coordinate of each second traffic identification, the current pose information of the vehicle and the preset parameters by adopting the following formula:
ds · [u, v, 1]^T = M_vc · T_wv · [x, y, z, 1]^T
wherein ds is the scale; x, y and z are respectively the abscissa, the ordinate and the vertical coordinate in the world coordinates of the second traffic identification; u and v are the coordinates of the corresponding point of the second traffic identification on the two-dimensional image plane after projection; T_wv denotes the current pose information of the vehicle, namely the transformation from the world coordinate system to the current vehicle positioning coordinate system; M_vc denotes the preset parameters, namely the calibration parameters from the origin of the current vehicle positioning coordinate system to the camera;
determining combinations to be matched according to the k types of second traffic identifications in the second static object group to obtain f combinations to be matched; wherein f = C(Q_1, P_1) × C(Q_2, P_2) × … × C(Q_k, P_k), and C(Q_i, P_i) denotes the number of combinations of P_i elements chosen from Q_i elements; the first static object group comprises k types of first traffic identifications, the numbers of which are respectively {P_1, P_2, …, P_k}; the second static object group comprises k types of second traffic identifications, the numbers of which are respectively {Q_1, Q_2, …, Q_k}; the P_i first traffic identifications and the Q_i second traffic identifications are traffic identifications of the same type, and P_i ≤ Q_i;
for each combination to be matched, taking the corresponding coordinate points of the second traffic identifications in the combination to be matched on the two-dimensional image plane together as the image plane point cloud of the combination to be matched;
and determining the image plane point cloud of each combination to be matched as an image plane point cloud of the map corresponding to the second static object group.
Further, the element matching, based on the image plane point cloud of the image and the image plane point cloud of each map, each first traffic identifier in the first static object group and each second traffic identifier in the second static object group includes:
aiming at the image plane point cloud of each map, the iteration times are 0, and aiming at each point in the image plane point cloud of the map, a point closest to the point is determined from the image plane point cloud of the image;
taking a point pair formed by the point and a point closest to the point as a point pair to be matched to obtain at least one point pair to be matched;
calculating rigid body transformation which enables the average distance of each point pair to be matched to be minimum according to each point pair to be matched to obtain translation parameters and rotation parameters of the rigid body transformation;
carrying out translation transformation and rotation transformation on the image plane point cloud of the map based on the translation parameter and the rotation parameter to obtain the image plane point cloud of the transformed map;
calculating the average distance between each pair of corresponding points in the image plane point cloud of the transformed map and the image plane point cloud of the image;
judging whether the distance average value is smaller than a first preset distance threshold value or not;
if so, determining the distance average value as the distance average value corresponding to the image plane point cloud of the map, and recording the translation parameter and the rotation parameter as the translation parameter and the rotation parameter corresponding to the image plane point cloud of the map;
if not, adding 1 to the iteration times, returning to each point in the image plane point cloud of the map, and determining the point closest to the point from the image plane point cloud of the image; stopping iteration until the iteration times reach the preset iteration times, and taking the distance average value obtained by the last iteration as the distance average value corresponding to the image plane point cloud of the map; recording the translation parameter and the rotation parameter obtained by the last iteration as the translation parameter and the rotation parameter corresponding to the image plane point cloud of the map;
selecting the corresponding map image plane point cloud with the minimum distance average value from the image plane point clouds of the maps as the image plane point cloud of the target map;
judging whether the distance average value corresponding to the image plane point cloud of the target map is smaller than a second preset distance threshold value or not;
if yes, determining that each first traffic identification in the first static object group and each second traffic identification element in the second static object group are successfully matched;
and if not, determining that the matching of each first traffic identification element in the first static object group and each second traffic identification element in the second static object group is unsuccessful.
Further, after the determining that the matching of the first traffic identifiers in the first static object group and the second traffic identifier elements in the second static object group is successful, the method further includes:
performing translation transformation and rotation transformation on the image plane point cloud of the target map based on translation parameters and rotation parameters corresponding to the image plane point cloud of the target map to obtain the image plane point cloud of the transformed target map;
performing one-to-one corresponding matching on each first traffic identification corresponding to each point in the image plane point cloud of the image and each second traffic identification corresponding to each point in the image plane point cloud of the converted target map by using a Hungarian algorithm to obtain at least one successfully matched target traffic identification; wherein each first traffic identification is matched to a second traffic identification; and taking the matched first traffic identification or second traffic identification as the target traffic identification.
In order to achieve the above object, an embodiment of the present invention provides a device for detecting a traffic sign, including:
the image information acquisition module is used for acquiring a current image shot by a camera in the vehicle and acquiring current pose information of the vehicle;
the first static object group acquisition module is used for carrying out object detection on the current image to obtain corresponding areas of all first traffic identifications in the first static object group on an image plane;
the first point cloud determining module is used for determining image plane point clouds of images corresponding to the first static object group based on corresponding areas of the first traffic identifications in the image plane;
the second point cloud determining module is used for acquiring the world coordinates of each second traffic identification in the second static object group from the region of interest of the map used by the vehicle based on the current pose information of the vehicle; determining image plane point clouds of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identifications;
the matching module is used for performing element matching on each first traffic identifier in the first static object group and each second traffic identifier in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map;
and the attribute information acquisition module is used for taking each first traffic identification as a target traffic identification if each first traffic identification in the first static object group is successfully matched with each second traffic identification in the second static object group, acquiring the attribute information of each target traffic identification from the attribute information of each traffic identification pre-stored in a map, and acquiring the color information of each target traffic identification according to the current image.
In order to achieve the above object, an embodiment of the present invention provides an electronic device, which includes a processor, a communication interface, a memory, and a communication bus, where the processor and the communication interface are configured to complete communication between the memory and the processor through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of the detection method of any traffic sign when executing the program stored in the memory.
In order to achieve the above object, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements any of the above steps of the method for detecting a traffic sign.
In order to achieve the above object, an embodiment of the present invention further provides a computer program product containing instructions, which when run on a computer, causes the computer to perform any of the above described steps of the method for detecting a traffic sign.
The embodiment of the invention has the following beneficial effects:
by adopting the method provided by the embodiment of the invention, the current image shot by the camera on the vehicle is obtained, and the current pose information of the vehicle is obtained; carrying out object detection on the current image to obtain corresponding areas of all first traffic identifications in the first static object group on an image plane; acquiring world coordinates of each second traffic identification in the second static object group from an area of interest of a map used by the vehicle based on current pose information of the vehicle; determining an image plane point cloud of an image corresponding to the first static object group based on the corresponding area of each first traffic identification in the image plane; determining image plane point clouds of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identifications; performing element matching on each first traffic identification in the first static object group and each second traffic identification in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map; and if the first traffic identifications in the first static object group are successfully matched with the second traffic identifications in the second static object group, taking the first traffic identifications as target traffic identifications, obtaining attribute information of the target traffic identifications from the attribute information of the traffic identifications prestored in the map, and obtaining color information of the target traffic identifications according to the current image. Matching each first traffic identification in the first static object group with each second traffic identification in the second static object group to obtain a successfully matched target traffic identification; the attribute information of each target traffic identification is obtained from the attribute information of each traffic identification pre-stored in the map, so that the problem that the acquired attribute information of the traffic identification is not accurate enough due to the fact that the image shot by the camera is not clear enough is solved, and more accurate attribute information of the traffic identification can be obtained when the vehicle is driven automatically.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other embodiments can be obtained by using the drawings without creative efforts.
Fig. 1 is a flowchart of a method for detecting a traffic sign according to an embodiment of the present invention;
fig. 2 is another flowchart of a method for detecting a traffic sign according to an embodiment of the present invention;
FIG. 3 is a current image taken by a camera in a vehicle;
FIG. 4 is a schematic diagram of the corresponding area of each first traffic sign in the first static object group in the image plane;
FIG. 5 is a flow chart of traffic sign matching;
fig. 6 is a schematic structural diagram of a detection apparatus for a traffic sign according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived from the embodiments given herein by one of ordinary skill in the art, are within the scope of the invention.
Fig. 1 is a flowchart of a method for detecting a traffic sign according to an embodiment of the present invention, as shown in fig. 1, including the following steps:
step 101, acquiring a current image shot by a camera on a vehicle, and acquiring current pose information of the vehicle.
And 102, carrying out object detection on the current image to obtain corresponding areas of the first traffic identifications in the first static object group on the image plane.
And 103, acquiring the world coordinates of each second traffic identification in the second static object group from the region of interest of the map used by the vehicle based on the current pose information of the vehicle.
And 104, determining image plane point clouds of images corresponding to the first static object group based on the corresponding areas of the first traffic marks in the image plane.
And 105, determining the image plane point cloud of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic marks.
And 106, performing element matching on each first traffic identifier in the first static object group and each second traffic identifier in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map.
And 107, if the first traffic identifications in the first static object group are successfully matched with the second traffic identifications in the second static object group, taking the first traffic identifications as target traffic identifications, obtaining attribute information of the target traffic identifications from the attribute information of the traffic identifications stored in a map in advance, and obtaining color information of the target traffic identifications according to the current image.
By adopting the method provided by the embodiment of the invention, the current image shot by the camera on the vehicle is obtained, and the current pose information of the vehicle is obtained; carrying out object detection on the current image to obtain corresponding areas of all first traffic identifications in the first static object group on an image plane; acquiring world coordinates of each second traffic identification in the second static object group from an area of interest of a map used by the vehicle based on current pose information of the vehicle; determining an image plane point cloud of an image corresponding to the first static object group based on the corresponding area of each first traffic identification in the image plane; determining image plane point clouds of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identifications; performing element matching on each first traffic identification in the first static object group and each second traffic identification in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map; and if the first traffic identifications in the first static object group are successfully matched with the second traffic identifications in the second static object group, taking the first traffic identifications as target traffic identifications, obtaining attribute information of the target traffic identifications from the attribute information of the traffic identifications prestored in the map, and obtaining color information of the target traffic identifications according to the current image. Matching each first traffic identification in the first static object group with each second traffic identification in the second static object group to obtain a successfully matched target traffic identification; the attribute information of each target traffic identification is obtained from the attribute information of each traffic identification pre-stored in the map, so that the problem that the acquired attribute information of the traffic identification is not accurate enough due to the fact that the image shot by the camera is not clear enough is solved, and more accurate attribute information of the traffic identification can be obtained when the vehicle is driven automatically.
Fig. 2 is another flow of the method for detecting a traffic sign according to the embodiment of the present invention, as shown in fig. 2, including the following steps:
step 201, acquiring a current image shot by a camera in the vehicle, and acquiring current pose information of the vehicle.
The automatic driving vehicle is provided with a camera which can shoot images in front of the vehicle and/or around the vehicle. In this step, an image in front of the vehicle may be photographed as a photographed current image during the automatic traveling of the vehicle. And current pose information of the vehicle can be acquired from a positioning system of the vehicle. The current pose information of the vehicle may be used to transform objects in the world coordinate system to the current vehicle positioning coordinate system.
Step 202, performing object detection on the current image to obtain a corresponding area of each first traffic sign in the first static object group on the image plane.
In the embodiment of the present invention, the traffic identifier may include: traffic lights, speed limit signs, traffic indication signs (e.g., straight signs and left turn signs), and the like.
In this step, each first traffic identifier in the current image may be detected by using the image detection model, and a corresponding region of each first traffic identifier in the current image may be determined, and each detected first traffic identifier may be used as the first static object group. The image detection model can be any existing neural network model for image detection.
For example, fig. 3 is a current image taken by a camera in a vehicle. Referring to fig. 3, the first traffic sign captured in the current image includes: traffic light 301-traffic light 306, traffic sign 307. And the detected traffic lights 301-306 and traffic sign 307 are taken together as a first static group of objects.
In this step, the image detection model may be used to determine the corresponding area of each first traffic sign in the current image, for example, the corresponding area of the traffic light 301 in fig. 3 in the image may be determined.
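As an illustrative sketch only (the embodiment does not prescribe a particular detection model or output format, so the detector interface below is hypothetical), the first static object group can be assembled from the detector output roughly as follows:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FirstTrafficSign:
    sign_type: str                      # e.g. "traffic_light" or "sign_board"
    box: Tuple[int, int, int, int]      # (u_min, v_min, u_max, v_max) on the image plane

def build_first_static_object_group(detections: List[dict]) -> List[FirstTrafficSign]:
    """Keep only traffic-identification classes from a generic detector output.

    `detections` is assumed to be a list of {"class": str, "box": (u0, v0, u1, v1)}
    produced by whatever neural-network detection model the system uses.
    """
    traffic_classes = {"traffic_light", "sign_board"}
    return [FirstTrafficSign(d["class"], tuple(d["box"]))
            for d in detections if d["class"] in traffic_classes]
```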
And step 203, acquiring the world coordinates of each second traffic identification in the second static object group from the region of interest of the map used by the vehicle based on the current pose information of the vehicle.
In an embodiment of the invention, the second static object group comprises at least one second traffic sign.
In this step, the current position of the vehicle can be determined according to the current vehicle pose, and then a region of interest (ROI) can be queried in a map used by the vehicle according to the current position of the vehicle. Further, world coordinates of each second traffic identification in the ROI may be looked up and used as a second static object group.
The ROI may be a region from the current position of the vehicle to a position 100 meters ahead of the vehicle. The map used by the vehicle may be a high-precision map, or the map used by the vehicle may be: any map containing attribute information of traffic signs in the current driving area of the vehicle.
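The following Python sketch illustrates one possible ROI query, assuming a simple in-memory map whose elements carry world coordinates and attribute records; a production system would instead query a high-precision map through its own interface, and the 100-meter ROI here is only the example given above:

```python
import math
from typing import List, Dict

def query_second_static_object_group(map_elements: List[Dict],
                                     vehicle_xy: tuple,
                                     vehicle_heading_rad: float,
                                     roi_length_m: float = 100.0) -> List[Dict]:
    """Return the map traffic identifications lying in the ROI ahead of the vehicle.

    Each map element is assumed to look like
    {"id": ..., "type": "traffic_light", "world_xyz": (x, y, z), "attributes": {...}}.
    The ROI is taken simply as "within roi_length_m ahead of the vehicle"; a real
    implementation would also bound the lateral extent and follow the road geometry.
    """
    vx, vy = vehicle_xy
    fwd = (math.cos(vehicle_heading_rad), math.sin(vehicle_heading_rad))
    roi = []
    for elem in map_elements:
        x, y, _ = elem["world_xyz"]
        # longitudinal distance of the element along the vehicle's heading
        longitudinal = (x - vx) * fwd[0] + (y - vy) * fwd[1]
        if 0.0 <= longitudinal <= roi_length_m:
            roi.append(elem)
    return roi
```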
Step 204, for each first traffic identification in the first static object group in the current image, calculating the average coordinate of each pixel point corresponding to the contour of the first traffic identification according to the corresponding area of the first traffic identification in the image plane, and taking the pixel point at the average coordinate as the point corresponding to the first traffic identification; and the points corresponding to the first traffic identifications are used as the image plane point cloud of the image together.
For example, fig. 4 is a schematic diagram of the corresponding area of each first traffic sign in the first static object group on the image plane. Referring to fig. 4, the pixel points corresponding to the contour of the first traffic sign 401 include: pixel point a, pixel point b, pixel point c and pixel point d, whose coordinates in the pixel coordinate system are respectively (x_a, y_a), (x_b, y_b), (x_c, y_c) and (x_d, y_d). For the first traffic sign 401, the average coordinate of pixel points a, b, c and d corresponding to its contour can be calculated as ((x_a + x_b + x_c + x_d)/4, (y_a + y_b + y_c + y_d)/4), and the point at this average coordinate is taken as the point corresponding to the first traffic sign 401. Based on the same method, the points corresponding to the first traffic sign 402 to the first traffic sign 404 can be calculated respectively; the points corresponding to the first traffic signs 401 to 404 are used together as the image plane point cloud of the image.
In the embodiment of the present invention, determining the point cloud of the image plane of the image may specifically be: calculating the average coordinate of each pixel point corresponding to each first traffic identification in a first static object group in the current image, and taking the point at the average coordinate as the point corresponding to the first traffic identification; and the points corresponding to the first traffic identifications are used as the image plane point cloud of the image together.
For example, referring to fig. 4, the pixel points of the first traffic sign 402 on the image plane include: pixel point o, pixel point p and pixel point q, whose coordinates in the pixel coordinate system are respectively (x_o, y_o), (x_p, y_p) and (x_q, y_q). For the first traffic sign 402, the average coordinate of pixel points o, p and q can be calculated as ((x_o + x_p + x_q)/3, (y_o + y_p + y_q)/3), and the point at this average coordinate is taken as the point corresponding to the first traffic sign 402. Points corresponding to the first traffic sign 401, the first traffic sign 403 and the first traffic sign 404 can be calculated respectively based on the same method; the points corresponding to the first traffic signs 401 to 404 are used together as the image plane point cloud of the image.
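The averaging in this step can be sketched as follows (numpy, with hypothetical contour data); each first traffic sign contributes one averaged point, and these points together form the image plane point cloud of the image:

```python
import numpy as np

def image_plane_point_cloud(contours: list) -> np.ndarray:
    """contours: one (N_i, 2) array of (u, v) pixel coordinates per first traffic sign.

    Returns an (M, 2) array with one averaged point per traffic sign.
    """
    return np.array([np.asarray(c, dtype=float).mean(axis=0) for c in contours])

# Example: four contour corners of one sign and three contour points of another
cloud = image_plane_point_cloud([
    [(100, 50), (140, 50), (140, 90), (100, 90)],   # -> (120.0, 70.0)
    [(300, 60), (320, 60), (310, 85)],               # -> (310.0, ~68.3)
])
print(cloud)
```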
And step 205, determining image plane point clouds of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identifications.
In the embodiment of the present invention, the corresponding coordinate point of each second traffic sign in the second static object group on the two-dimensional image plane after projection may be determined based on the world coordinates of the second traffic sign, the current pose information of the vehicle and the preset parameters by using the following formula:
ds · [u, v, 1]^T = M_vc · T_wv · [x, y, z, 1]^T
wherein ds is the scale; x, y and z are respectively the abscissa, the ordinate and the vertical coordinate in the world coordinates of the second traffic sign; u and v are the coordinates of the corresponding point of the second traffic sign on the two-dimensional image plane after projection; T_wv denotes the current pose information of the vehicle, namely the transformation from the world coordinate system to the current vehicle positioning coordinate system; and M_vc denotes the preset parameters, namely the calibration parameters from the origin of the current vehicle positioning coordinate system to the camera.
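The projection can be sketched as follows, assuming the current pose is supplied as a 4x4 world-to-vehicle transform T_wv and the preset parameters as a 3x4 matrix M_vc that already folds in the camera intrinsics (both names and shapes are assumptions made for illustration):

```python
import numpy as np

def project_to_image_plane(world_xyz, T_wv, M_vc):
    """Project one second traffic sign onto the two-dimensional image plane.

    world_xyz : (x, y, z) world coordinates of the traffic sign
    T_wv      : 4x4 transform from the world frame to the current vehicle positioning frame
                (derived from the current pose information of the vehicle)
    M_vc      : 3x4 preset calibration from the vehicle positioning origin to the camera,
                assumed to include the camera intrinsics
    Returns (u, v) on the image plane.
    """
    p_world = np.array([*world_xyz, 1.0])      # homogeneous world point
    uvw = M_vc @ (T_wv @ p_world)              # equals ds * [u, v, 1]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]    # divide out the scale ds
```

The scale ds simply falls out as the third homogeneous component of the projected vector.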
After determining the corresponding coordinate points of the second traffic signs on the two-dimensional image plane, the embodiment of the present invention may determine the image plane point cloud of the at least one map corresponding to the second static object group by using the following method described in step a 1-step A3:
step A1: and determining the combinations to be matched according to the k types of second traffic identifications in the second static object group to obtain f combinations to be matched.
wherein f = C(Q_1, P_1) × C(Q_2, P_2) × … × C(Q_k, P_k), and C(Q_i, P_i) denotes the number of combinations of P_i elements chosen from Q_i elements; the first static object group comprises k types of first traffic signs, the numbers of which are respectively {P_1, P_2, …, P_k}; the second static object group comprises k types of second traffic signs, the numbers of which are respectively {Q_1, Q_2, …, Q_k}; the P_i first traffic signs and the Q_i second traffic signs are traffic signs of the same type, and P_i ≤ Q_i.
step A2: and for each combination to be matched, taking the corresponding coordinate points of the second traffic identifications in the combination to be matched on the two-dimensional image plane as the image plane point cloud of the combination to be matched.
Step A3: and determining each image plane point cloud to be matched and combined as the image plane point cloud of the map corresponding to the second static object group.
For example, suppose the second static object group includes 2 types of second traffic signs, namely second signs and second traffic lights, whose numbers are respectively Q_1 and Q_2, and the first static object group includes the same 2 types of first traffic signs, namely first signs and first traffic lights, whose numbers are respectively P_1 and P_2. A sign may be a traffic sign board, a speed limit sign board, or the like. Every combination formed by P_1 different second signs and P_2 different second traffic lights in the second static object group can be taken as a combination to be matched, obtaining f = C(Q_1, P_1) × C(Q_2, P_2) combinations to be matched.
For each of these combinations to be matched, the corresponding coordinate points of the second traffic signs in the combination on the two-dimensional image plane are used together as the image plane point cloud of the combination to be matched, and the image plane point clouds of the combinations to be matched are determined as the image plane point clouds of the map corresponding to the second static object group.
For example, if the first static object group comprises a first sign a1, a first traffic light a2 and a first traffic light a3, and the second static object group comprises a second sign b1, a second traffic light b2, a second traffic light b3 and a second traffic light b4, then every combination of 1 second sign and 2 different second traffic lights in the second static object group is taken as a combination to be matched, obtaining C(1, 1) × C(3, 2) = 3 combinations to be matched. The combination to be matched 1 comprises: the second sign b1, the second traffic light b2 and the second traffic light b3; the combination to be matched 2 comprises: the second sign b1, the second traffic light b2 and the second traffic light b4; the combination to be matched 3 comprises: the second sign b1, the second traffic light b3 and the second traffic light b4.
Taking the combination 1 to be matched as an example, the corresponding coordinate points of the second sign b1, the second traffic light b2 and the second traffic light b3 in the combination 1 to be matched on the two-dimensional image plane are collectively used as the point cloud of the image plane of the combination 1 to be matched. In the same way, the image plane point cloud of the combination 2 to be matched and the image plane point cloud of the combination 3 to be matched can be determined. And determining the image plane point cloud of the combination 1 to be matched, the image plane point cloud of the combination 2 to be matched and the image plane point cloud of the combination 3 to be matched as the image plane point cloud of the map corresponding to the second static object group.
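Steps A1 to A3 can be sketched with itertools, assuming the map elements and detection counts are grouped by type (the data layout below is hypothetical): for each type, choose P_i of the Q_i map elements, then take the Cartesian product across types.

```python
from itertools import combinations, product

def combinations_to_match(second_by_type: dict, first_counts: dict) -> list:
    """second_by_type: {type: [map elements of that type]}  (counts Q_1..Q_k)
    first_counts   : {type: number of detected elements of that type}  (counts P_1..P_k)
    Returns the f = prod_i C(Q_i, P_i) combinations to be matched,
    each as a flat list of map elements.
    """
    per_type = [list(combinations(second_by_type[t], first_counts[t]))
                for t in first_counts]            # C(Q_i, P_i) choices per type
    return [sum((list(choice) for choice in combo), [])
            for combo in product(*per_type)]      # one combination per cross-product entry

# Example matching the text: 1 sign + 2 lights detected, 1 sign + 3 lights in the map
combos = combinations_to_match(
    {"sign": ["b1"], "light": ["b2", "b3", "b4"]},
    {"sign": 1, "light": 2},
)
print(len(combos), combos)   # 3 combinations: [b1,b2,b3], [b1,b2,b4], [b1,b3,b4]
```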
And step 206, performing element matching on each first traffic identifier in the first static object group and each second traffic identifier in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map.
And step 207, if each first traffic identification in the first static object group is successfully matched with each second traffic identification in the second static object group, taking each first traffic identification as a target traffic identification, obtaining attribute information of each target traffic identification from the attribute information of each traffic identification pre-stored in the map, and obtaining color information of each target traffic identification according to the current image.
In the embodiment of the invention, if the target traffic identification is a traffic light, the attribute information of the target traffic identification acquired from the map stored in advance can comprise lane information to which the traffic light belongs, steering information of the traffic light and the like; and if the target traffic sign is the speed limit sign, the attribute information of the target traffic sign, which is acquired from the map stored in advance, is the speed limit value in the speed limit sign.
And 208, if the first traffic identifications in the first static object group are unsuccessfully matched with the second traffic identifications in the second static object group, acquiring the attribute information and the color information of the first traffic identifications according to the current image.
For example, if the first traffic sign is a traffic light, the lane information to which the traffic light belongs, its steering information, its color information and the like can be acquired from the current image; if the first traffic sign is a speed limit signboard, the speed limit value on the signboard can be acquired from the current image.
In the implementation of the present invention, fig. 5 is a flow of traffic sign matching. Referring to fig. 5, in step 206, based on the image plane point cloud of the image and the image plane point clouds of the maps, performing element matching on each first traffic identifier in the first static object group and each second traffic identifier in the second static object group, which may specifically include the following steps:
step 501, aiming at the image plane point cloud of each map, the iteration frequency is 0, and aiming at each point in the image plane point cloud of the map, a point closest to the point is determined from the image plane point cloud of the image.
Step 502, a point pair formed by the point and a point closest to the point is taken as a point pair to be matched, and at least one point pair to be matched is obtained.
Step 503, according to each point pair to be matched, calculating the rigid body transformation which enables the average distance of each point pair to be matched to be minimum, and obtaining the translation parameter and the rotation parameter of the rigid body transformation.
The rigid body transformation can be performed on the point pair matrix formed by each point pair to be matched. Specifically, the point pair matrix may be subjected to rigid body transformation by using an existing rigid body transformation method, which is not limited herein.
And 504, performing translation transformation and rotation transformation on the image plane point cloud of the map based on the translation parameter and the rotation parameter to obtain the image plane point cloud of the transformed map.
Specifically, in this step, the image plane point cloud of the map may be subjected to translation transformation according to the translation parameters of the rigid body transformation, and then the image plane point cloud of the map may be subjected to rotation transformation according to the rotation parameters of the rigid body transformation, so as to obtain the image plane point cloud of the transformed map. Or, in this step, the image plane point cloud of the map may be subjected to rotation transformation according to the rotation parameter of the rigid body transformation, and then the image plane point cloud of the map may be subjected to translation transformation according to the translation parameter of the rigid body transformation, so as to obtain the image plane point cloud of the transformed map.
Step 505, calculating the average distance between each pair of corresponding points in the image plane point cloud of the transformed map and the image plane point cloud of the image.
Step 506, determining whether the distance average value is smaller than a first preset distance threshold, if so, executing step 507, and if not, executing step 508.
The first preset distance threshold may be specifically set according to practical applications, and is not limited herein.
And 507, determining the distance average value as the distance average value corresponding to the image plane point cloud of the map, and recording the translation parameter and the rotation parameter as the translation parameter and the rotation parameter corresponding to the image plane point cloud of the map. Step 509 is then performed.
Step 508, adding 1 to the iteration number, and returning to the step 501, where the point closest to each point in the image plane point cloud of the map is determined from the image plane point cloud of the image; stopping iteration until the iteration times reach the preset iteration times, and taking the distance average value obtained by the last iteration as the distance average value corresponding to the image plane point cloud of the map; and recording the translation parameter and the rotation parameter obtained by the last iteration as the translation parameter and the rotation parameter corresponding to the image plane point cloud of the map.
The preset number of iterations may be specifically set according to the actual application, for example, 30 or 40, and the like.
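A minimal numpy sketch of the per-point-cloud iteration of steps 501 to 508 follows, reading the distance average as the mean distance between the transformed map points and their nearest image points; dist_thresh and max_iters stand in for the first preset distance threshold and the preset iteration count, and the rigid-transform estimation uses the standard SVD (Kabsch) solution:

```python
import numpy as np

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """2-D rigid transform (rotation R, translation t) minimising mean |R*src + t - dst|."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def match_map_cloud(map_cloud: np.ndarray, image_cloud: np.ndarray,
                    dist_thresh: float = 5.0, max_iters: int = 30):
    """ICP-style alignment of one map image-plane point cloud to the image point cloud.

    Returns (mean_distance, R_total, t_total). The loop stops when the mean distance
    between the transformed map points and their nearest image points drops below
    dist_thresh, or after max_iters iterations.
    """
    R_total, t_total = np.eye(2), np.zeros(2)
    cloud = map_cloud.copy()
    mean_dist = np.inf
    for _ in range(max_iters):
        # nearest image point for every map point
        d2 = ((cloud[:, None, :] - image_cloud[None, :, :]) ** 2).sum(axis=2)
        nearest = image_cloud[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cloud, nearest)
        cloud = cloud @ R.T + t                         # transformed map point cloud
        R_total, t_total = R @ R_total, R @ t_total + t  # accumulate the rigid transform
        mean_dist = np.linalg.norm(cloud - nearest, axis=1).mean()
        if mean_dist < dist_thresh:
            break
    return mean_dist, R_total, t_total
```

In this reading, R_total and t_total correspond to the rotation parameter and translation parameter recorded for the image plane point cloud of the map, and mean_dist to the distance average compared across maps in steps 509 and 510.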
In step 509, the image plane point cloud of the map with the smallest distance average is selected from the image plane point clouds of the maps as the image plane point cloud of the target map.
Step 510, determining whether the distance average value corresponding to the image plane point cloud of the target map is smaller than a second preset distance threshold, if so, executing step 511, and if not, executing step 512.
The second preset distance threshold may be specifically set according to practical applications, and is not limited herein. The first preset distance threshold and the second preset distance threshold may be the same or different.
And 511, determining that the matching of each first traffic identification in the first static object group and each second traffic identification element in the second static object group is successful.
In the embodiment of the present invention, after determining that the element matching is successful, the steps B1-B2 may be further performed:
step B1: and carrying out translation transformation and rotation transformation on the image plane point cloud of the target map based on the translation parameter and the rotation parameter corresponding to the image plane point cloud of the target map to obtain the transformed image plane point cloud of the target map.
Step B2: performing one-to-one corresponding matching on each first traffic identification corresponding to each point in the image plane point cloud of the image and each second traffic identification corresponding to each point in the image plane point cloud of the converted target map by using a Hungarian algorithm to obtain at least one successfully matched target traffic identification; wherein each first traffic identification is matched to a second traffic identification; and taking the matched first traffic identification or second traffic identification as the target traffic identification.
And 512, determining that the matching of each first traffic identification in the first static object group and each second traffic identification element in the second static object group is unsuccessful.
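A sketch of the one-to-one matching described in step B2, using the Hungarian algorithm as implemented in scipy; the cost is taken here as the pixel distance between each point of the image plane point cloud of the image and each point of the image plane point cloud of the transformed target map (a simplifying assumption, since the embodiment does not fix the cost definition):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_match(image_cloud: np.ndarray, transformed_map_cloud: np.ndarray):
    """Match every first traffic sign (image point) to exactly one second traffic sign
    (transformed target-map point) with minimum total distance.

    Returns a list of (image_index, map_index) pairs; each matched pair corresponds
    to one target traffic sign.
    """
    cost = np.linalg.norm(image_cloud[:, None, :] - transformed_map_cloud[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return list(zip(rows.tolist(), cols.tolist()))
```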
By adopting the method provided by the embodiment of the invention, the successfully matched target traffic identification is obtained by matching each first traffic identification in the first static object group with each second traffic identification in the second static object group; the attribute information of each target traffic sign is obtained from the attribute information of each target traffic sign pre-stored in the map, and the color information of each target traffic sign is obtained from the current image, so that the problem that the obtained information of the traffic sign is not accurate enough due to the fact that the image shot by the camera is not clear enough is solved, and more accurate information of the traffic sign can be obtained when the vehicle is automatically driven. Moreover, when the first traffic identification is matched with the second traffic identification, the matching effect is better and the obtained information of each target traffic identification is more accurate because not only a single traffic identification but also the overall distribution condition of the traffic identification is considered.
Based on the same inventive concept, according to the method for detecting a traffic sign provided in the above embodiment of the present invention, correspondingly, another embodiment of the present invention further provides a device for detecting a traffic sign, a schematic structural diagram of which is shown in fig. 6, and the method specifically includes:
an image information obtaining module 601, configured to obtain a current image captured by a camera in a vehicle, and obtain current pose information of the vehicle;
a first static object group obtaining module 602, configured to perform object detection on the current image to obtain a corresponding area of each first traffic identifier in the first static object group on an image plane;
a first point cloud determining module 603, configured to determine, based on a corresponding area of each first traffic identifier in an image plane, an image plane point cloud of an image corresponding to the first static object group;
a second point cloud determining module 604, configured to obtain world coordinates of each second traffic identifier in a second static object group from an area of interest of a map used by the vehicle based on current pose information of the vehicle; determining image plane point clouds of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identifications;
a matching module 605, configured to perform element matching on each first traffic identifier in the first static object group and each second traffic identifier in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map;
an attribute information obtaining module 606, configured to, if each first traffic identifier in the first static object group is successfully matched with each second traffic identifier in the second static object group, take each first traffic identifier as a target traffic identifier, obtain attribute information of each target traffic identifier from attribute information of each traffic identifier pre-stored in a map, and obtain color information of each target traffic identifier according to the current image.
Therefore, by adopting the device provided by the embodiment of the invention, the current image shot by the camera on the vehicle is obtained, and the current pose information of the vehicle is obtained; carrying out object detection on the current image to obtain corresponding areas of all first traffic identifications in the first static object group on an image plane; acquiring world coordinates of each second traffic identification in the second static object group from an area of interest of a map used by the vehicle based on current pose information of the vehicle; determining an image plane point cloud of an image corresponding to the first static object group based on the corresponding area of each first traffic identification in the image plane; determining image plane point clouds of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identifications; performing element matching on each first traffic identification in the first static object group and each second traffic identification in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map; and if the first traffic identifications in the first static object group are successfully matched with the second traffic identifications in the second static object group, taking the first traffic identifications as target traffic identifications, obtaining attribute information of the target traffic identifications from the attribute information of the traffic identifications prestored in the map, and obtaining color information of the target traffic identifications according to the current image. Matching each first traffic identification in the first static object group with each second traffic identification in the second static object group to obtain a successfully matched target traffic identification; the attribute information of each target traffic identification is obtained from the attribute information of each traffic identification pre-stored in the map, so that the problem that the acquired attribute information of the traffic identification is not accurate enough due to the fact that the image shot by the camera is not clear enough is solved, and more accurate attribute information of the traffic identification can be obtained when the vehicle is driven automatically.
Further, the attribute information obtaining module 606 is further configured to, if each first traffic identifier in the first static object group is unsuccessfully matched with each second traffic identifier in the second static object group, obtain attribute information and color information of each first traffic identifier according to the current image.
Further, the first point cloud determining module 603 is specifically configured to, for each first traffic identifier in the first static object group in the current image, calculate an average coordinate of each pixel point corresponding to the contour of the first traffic identifier according to a corresponding region of the first traffic identifier in the image plane, and take the pixel point at the average coordinate as a point corresponding to the first traffic identifier; and taking points corresponding to the first traffic identifications together as image plane point clouds of the images.
Further, the second point cloud determining module 604 is configured to: determining the corresponding coordinate point of each second traffic identification in the second static object group after the second traffic identification is projected to the two-dimensional image plane based on the world coordinate of each second traffic identification, the current pose information of the vehicle and the preset parameters by adopting the following formula:
$$ d_s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = T_{vc} \, T_{wv} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} $$

wherein $d_s$ is the scale; x, y and z are respectively the abscissa, the ordinate and the vertical coordinate of the second traffic sign in world coordinates; u and v are the coordinates of the point on the two-dimensional image plane corresponding to the second traffic sign after it is projected onto the two-dimensional image plane; $T_{wv}$ is the current pose information of the vehicle; and $T_{vc}$ is the preset parameter, namely the calibration parameters from the origin of the current vehicle positioning coordinate system to the camera;
determining combinations to be matched according to the k types of second traffic identifications in the second static object group to obtain f combinations to be matched; wherein,

$$ f = \prod_{i=1}^{k} C_{Q_i}^{P_i} $$

the first static object group comprises k types of first traffic signs, and the numbers of the k types of first traffic signs are respectively $\{P_1, P_2, \ldots, P_k\}$; the second static object group comprises k types of second traffic signs, and the numbers of the k types of second traffic signs are respectively $\{Q_1, Q_2, \ldots, Q_k\}$; the $P_i$ first traffic signs and the $Q_i$ second traffic signs are traffic signs of the same type, and $P_i \le Q_i$ for $i = 1, 2, \ldots, k$;
for each combination to be matched, taking the coordinate points, on the two-dimensional image plane, corresponding to the second traffic identifications in the combination to be matched together as the image plane point cloud of the combination to be matched;
and determining the image plane point cloud of each combination to be matched as an image plane point cloud of the map corresponding to the second static object group.
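A minimal sketch of this projection and combination step is given below. It assumes that the current pose is supplied as a 4×4 homogeneous transform from the world frame to the vehicle positioning frame, that the preset parameters are a 3×4 projection matrix from the vehicle positioning frame to the image plane (camera intrinsics folded in), and that the combinations to be matched are formed by choosing, for each sign type i, P_i of the Q_i map signs; these representations are illustrative assumptions rather than details fixed by the disclosure.

```python
import itertools
import numpy as np

def project_to_image_plane(xyz_world, T_world_to_vehicle, P_vehicle_to_camera):
    """Project one second traffic sign from world coordinates to (u, v).

    T_world_to_vehicle: assumed 4x4 homogeneous transform derived from the
    current pose; P_vehicle_to_camera: assumed 3x4 calibration/projection matrix.
    """
    p = np.append(np.asarray(xyz_world, dtype=float), 1.0)   # homogeneous world point
    uvw = P_vehicle_to_camera @ (T_world_to_vehicle @ p)
    ds = uvw[2]                                               # the scale ds
    return uvw[:2] / ds                                       # (u, v) on the image plane

def map_point_clouds(projected_by_type, counts_needed):
    """Enumerate the f combinations to be matched and build one image plane
    point cloud of the map per combination.

    projected_by_type[i]: list of (u, v) points of the Q_i second signs of type i;
    counts_needed[i]: P_i, the number of first signs of that type.
    """
    per_type = [list(itertools.combinations(points, k))
                for points, k in zip(projected_by_type, counts_needed)]
    clouds = []
    for combo in itertools.product(*per_type):                # one choice per sign type
        clouds.append(np.vstack([np.vstack(group) for group in combo]))
    return clouds                                             # f point clouds in total
```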
Further, the matching module 605 is specifically configured to:
for the image plane point cloud of each map, setting the number of iterations to 0, and for each point in the image plane point cloud of the map, determining the point closest to that point from the image plane point cloud of the image;
taking a point pair formed by the point and a point closest to the point as a point pair to be matched to obtain at least one point pair to be matched;
calculating rigid body transformation which enables the average distance of each point pair to be matched to be minimum according to each point pair to be matched to obtain translation parameters and rotation parameters of the rigid body transformation;
carrying out translation transformation and rotation transformation on the image plane point cloud of the map based on the translation parameter and the rotation parameter to obtain the image plane point cloud of the transformed map;
calculating the average distance value between each corresponding point in the image plane point cloud of the map and the image plane point cloud of the transformed map;
judging whether the distance average value is smaller than a first preset distance threshold value or not;
if so, determining the distance average value as the distance average value corresponding to the image plane point cloud of the map, and recording the translation parameter and the rotation parameter as the translation parameter and the rotation parameter corresponding to the image plane point cloud of the map;
if not, adding 1 to the number of iterations and returning to the step of determining, for each point in the image plane point cloud of the map, the point closest to that point from the image plane point cloud of the image; stopping the iteration when the number of iterations reaches the preset number of iterations, taking the distance average value obtained in the last iteration as the distance average value corresponding to the image plane point cloud of the map, and recording the translation parameter and the rotation parameter obtained in the last iteration as the translation parameter and the rotation parameter corresponding to the image plane point cloud of the map;
selecting the corresponding map image plane point cloud with the minimum distance average value from the image plane point clouds of the maps as the image plane point cloud of the target map;
judging whether the distance average value corresponding to the image plane point cloud of the target map is smaller than a second preset distance threshold value or not;
if yes, determining that each first traffic identification in the first static object group is successfully matched with each second traffic identification in the second static object group;
and if not, determining that each first traffic identification in the first static object group is unsuccessfully matched with each second traffic identification in the second static object group.
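The iterative matching described above follows the pattern of an iterative closest point (ICP) registration between two 2D point sets. A simplified sketch is given below; the SVD-based solution of the rigid transformation, the residual computed against the matched image points, and the values of the two distance thresholds and the iteration limit are illustrative assumptions rather than values fixed by the disclosure.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Rotation R and translation t minimising the mean distance between
    R @ src[i] + t and dst[i] (Kabsch / SVD solution, used here as one way
    to obtain the translation and rotation parameters)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def match_map_cloud(map_cloud, image_cloud, d1=2.0, d2=5.0, max_iter=20):
    """Align one image plane point cloud of the map to the image plane point
    cloud of the image; d1, d2 (first/second preset distance thresholds, in
    pixels) and max_iter are placeholder values."""
    tree = cKDTree(image_cloud)
    cloud = map_cloud.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(max_iter):
        _, idx = tree.query(cloud)                   # closest image point for each map point
        R, t = best_rigid_transform(cloud, image_cloud[idx])
        cloud = cloud @ R.T + t                      # translate and rotate the map cloud
        R_total, t_total = R @ R_total, R @ t_total + t
        mean_dist = np.linalg.norm(cloud - image_cloud[idx], axis=1).mean()
        if mean_dist < d1:                           # first preset distance threshold
            break
    # The caller keeps the map cloud with the smallest mean_dist and declares
    # the element matching successful only if that minimum is below d2.
    return mean_dist, (R_total, t_total), mean_dist < d2
```

In this sketch the distance average is computed against the matched image points, a common ICP choice, whereas the text above compares the map point cloud before and after the transformation; in both cases the value is what decides whether the iteration stops.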
Further, after determining that each first traffic identification in the first static object group is successfully matched with each second traffic identification in the second static object group, the matching module 605 is further configured to:
performing translation transformation and rotation transformation on the image plane point cloud of the target map based on translation parameters and rotation parameters corresponding to the image plane point cloud of the target map to obtain the image plane point cloud of the transformed target map;
performing one-to-one matching, by using the Hungarian algorithm, between each first traffic identification corresponding to a point in the image plane point cloud of the image and each second traffic identification corresponding to a point in the image plane point cloud of the transformed target map, to obtain at least one successfully matched target traffic identification; wherein each first traffic identification is matched to one second traffic identification; and taking each matched first traffic identification or second traffic identification as a target traffic identification.
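A brief sketch of this one-to-one assignment, using the Hungarian algorithm as implemented by scipy's linear_sum_assignment, is given below; the pairwise pixel-distance cost matrix is an assumption about the cost being minimised, not a detail taken from the disclosure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_signs(image_cloud, transformed_map_cloud):
    """One-to-one matching of first traffic identifications (rows) to second
    traffic identifications (columns); the pixel-distance cost is an assumed
    choice for this sketch."""
    cost = np.linalg.norm(
        image_cloud[:, None, :] - transformed_map_cloud[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
    return list(zip(rows.tolist(), cols.tolist()))    # (first sign idx, second sign idx) pairs
```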
By adopting the device provided by the embodiment of the invention, the successfully matched target traffic identifications are obtained by matching each first traffic identification in the first static object group with each second traffic identification in the second static object group; the attribute information of each target traffic identification is obtained from the attribute information of the traffic identifications pre-stored in the map, and the color information of each target traffic identification is obtained from the current image, so that the problem that the obtained information of a traffic identification is not accurate enough because the image shot by the camera is not clear enough is solved, and more accurate information of the traffic identifications can be obtained during automatic driving of the vehicle. Moreover, when the first traffic identifications are matched with the second traffic identifications, not only the individual traffic identifications but also the overall distribution of the traffic identifications is considered, so the matching effect is better and the obtained information of each target traffic identification is more accurate.
An embodiment of the present invention further provides an electronic device, as shown in fig. 7, including a processor 701, a communication interface 702, a memory 703 and a communication bus 704, where the processor 701, the communication interface 702, and the memory 703 complete mutual communication through the communication bus 704,
a memory 703 for storing a computer program;
the processor 701 is configured to implement the following steps when executing the program stored in the memory 703:
acquiring a current image shot by a camera on a vehicle, and acquiring current pose information of the vehicle;
carrying out object detection on the current image to obtain corresponding areas of all first traffic identifications in the first static object group on an image plane;
acquiring world coordinates of each second traffic identification in the second static object group from an area of interest of a map used by the vehicle based on current pose information of the vehicle;
determining an image plane point cloud of an image corresponding to the first static object group based on the corresponding area of each first traffic identification in the image plane;
determining image plane point clouds of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identifications;
performing element matching on each first traffic identification in the first static object group and each second traffic identification in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map;
and if each first traffic identification in the first static object group is successfully matched with each second traffic identification in the second static object group, taking each first traffic identification as a target traffic identification, obtaining attribute information of each target traffic identification from the attribute information of each traffic identification pre-stored in a map, and obtaining color information of each target traffic identification according to the current image.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above-mentioned methods for detecting a traffic sign.
In a further embodiment, the present invention also provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the method for detecting a traffic sign according to any one of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus, the electronic device and the storage medium, since they are substantially similar to the method embodiments, the description is relatively simple, and the relevant points can be referred to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. A method for detecting a traffic sign is characterized by comprising the following steps:
acquiring a current image shot by a camera on a vehicle, and acquiring current pose information of the vehicle;
carrying out object detection on the current image to obtain corresponding areas of all first traffic identifications in the first static object group on an image plane;
acquiring world coordinates of each second traffic identification in the second static object group from an area of interest of a map used by the vehicle based on current pose information of the vehicle;
determining an image plane point cloud of an image corresponding to the first static object group based on the corresponding area of each first traffic identification in the image plane;
determining image plane point clouds of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identifications;
performing element matching on each first traffic identification in the first static object group and each second traffic identification in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map;
if each first traffic identification in the first static object group is successfully matched with each second traffic identification in the second static object group, each first traffic identification is used as a target traffic identification, attribute information of each target traffic identification is obtained from the attribute information of each traffic identification pre-stored in a map, and color information of each target traffic identification is obtained according to the current image;
the determining of the image plane point cloud of the at least one map corresponding to the second static object group based on the world coordinates of the part or all of the second traffic sign comprises:
determining the corresponding coordinate point of each second traffic identification in the second static object group after the second traffic identification is projected to the two-dimensional image plane based on the world coordinate of each second traffic identification, the current pose information of the vehicle and the preset parameters by adopting the following formula:
$$ d_s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = T_{vc} \, T_{wv} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} $$

wherein $d_s$ is the scale; x, y and z are respectively the abscissa, the ordinate and the vertical coordinate of the second traffic sign in world coordinates; u and v are the coordinates of the point on the two-dimensional image plane corresponding to the second traffic sign after it is projected onto the two-dimensional image plane; $T_{wv}$ is the current pose information of the vehicle; and $T_{vc}$ is the preset parameter, namely the calibration parameters from the origin of the current vehicle positioning coordinate system to the camera;
determining combinations to be matched according to the k types of second traffic identifications in the second static object group to obtain f combinations to be matched; wherein,

$$ f = \prod_{i=1}^{k} C_{Q_i}^{P_i} $$

the first static object group comprises k types of first traffic signs, and the numbers of the k types of first traffic signs are respectively $\{P_1, P_2, \ldots, P_k\}$; the second static object group comprises k types of second traffic signs, and the numbers of the k types of second traffic signs are respectively $\{Q_1, Q_2, \ldots, Q_k\}$; the $P_i$ first traffic signs and the $Q_i$ second traffic signs are traffic signs of the same type, and $P_i \le Q_i$ for $i = 1, 2, \ldots, k$;
for each combination to be matched, taking the coordinate points, on the two-dimensional image plane, corresponding to the second traffic identifications in the combination to be matched together as the image plane point cloud of the combination to be matched;
and determining the image plane point cloud of each combination to be matched as an image plane point cloud of the map corresponding to the second static object group.
2. The method of claim 1, further comprising, after said element matching each first traffic identifier in a first static group of objects and each second traffic identifier in a second static group of objects based on said image plane point cloud of said image and said image plane point cloud of each said map:
and if the matching of each first traffic identification in the first static object group and each second traffic identification in the second static object group is unsuccessful, acquiring the attribute information and the color information of each first traffic identification according to the current image.
3. The method of claim 1, wherein determining an image plane point cloud of the image corresponding to the first static object group based on the corresponding area of each first traffic marker at the image plane comprises:
aiming at each first traffic identification in a first static object group in the current image, calculating the average coordinate of each pixel point corresponding to the outline of the first traffic identification according to the corresponding area of the first traffic identification in the image plane, and taking the pixel point at the average coordinate as the point corresponding to the first traffic identification;
and taking points corresponding to the first traffic identifications together as image plane point clouds of the images.
4. The method of claim 1, wherein the element matching each first traffic identifier in a first static object group and each second traffic identifier in a second static object group based on the image plane point cloud of the image and the image plane point cloud of each map comprises:
for the image plane point cloud of each map, setting the number of iterations to 0, and for each point in the image plane point cloud of the map, determining the point closest to that point from the image plane point cloud of the image;
taking a point pair formed by the point and a point closest to the point as a point pair to be matched to obtain at least one point pair to be matched;
calculating rigid body transformation which enables the average distance of each point pair to be matched to be minimum according to each point pair to be matched to obtain translation parameters and rotation parameters of the rigid body transformation;
carrying out translation transformation and rotation transformation on the image plane point cloud of the map based on the translation parameter and the rotation parameter to obtain the image plane point cloud of the transformed map;
calculating the average distance value between each corresponding point in the image plane point cloud of the map and the image plane point cloud of the transformed map;
judging whether the distance average value is smaller than a first preset distance threshold value or not;
if so, determining the distance average value as the distance average value corresponding to the image plane point cloud of the map, and recording the translation parameter and the rotation parameter as the translation parameter and the rotation parameter corresponding to the image plane point cloud of the map;
if not, adding 1 to the number of iterations and returning to the step of determining, for each point in the image plane point cloud of the map, the point closest to that point from the image plane point cloud of the image; stopping the iteration when the number of iterations reaches the preset number of iterations, taking the distance average value obtained in the last iteration as the distance average value corresponding to the image plane point cloud of the map, and recording the translation parameter and the rotation parameter obtained in the last iteration as the translation parameter and the rotation parameter corresponding to the image plane point cloud of the map;
selecting the corresponding map image plane point cloud with the minimum distance average value from the image plane point clouds of the maps as the image plane point cloud of the target map;
judging whether the distance average value corresponding to the image plane point cloud of the target map is smaller than a second preset distance threshold value or not;
if yes, determining that each first traffic identification in the first static object group is successfully matched with each second traffic identification in the second static object group;
and if not, determining that each first traffic identification in the first static object group is unsuccessfully matched with each second traffic identification in the second static object group.
5. The method of claim 4, wherein after the determining that each first traffic identification in the first static object group is successfully matched with each second traffic identification in the second static object group, the method further comprises:
performing translation transformation and rotation transformation on the image plane point cloud of the target map based on translation parameters and rotation parameters corresponding to the image plane point cloud of the target map to obtain the image plane point cloud of the transformed target map;
performing one-to-one matching, by using the Hungarian algorithm, between each first traffic identification corresponding to a point in the image plane point cloud of the image and each second traffic identification corresponding to a point in the image plane point cloud of the transformed target map, to obtain at least one successfully matched target traffic identification; wherein each first traffic identification is matched to one second traffic identification; and taking each matched first traffic identification or second traffic identification as a target traffic identification.
6. A traffic sign detection device, comprising:
the image information acquisition module is used for acquiring a current image shot by a camera in the vehicle and acquiring current pose information of the vehicle;
the first static object group acquisition module is used for carrying out object detection on the current image to obtain corresponding areas of all first traffic identifications in the first static object group on an image plane;
the first point cloud determining module is used for determining image plane point clouds of images corresponding to the first static object group based on corresponding areas of the first traffic identifications in the image plane;
the second point cloud determining module is used for acquiring the world coordinates of each second traffic identification in the second static object group from the region of interest of the map used by the vehicle based on the current pose information of the vehicle; determining image plane point clouds of at least one map corresponding to the second static object group based on the world coordinates of part or all of the second traffic identifications;
the matching module is used for performing element matching on each first traffic identifier in the first static object group and each second traffic identifier in the second static object group based on the image plane point cloud of the image and the image plane point cloud of each map;
the attribute information acquisition module is used for taking each first traffic identification as a target traffic identification if each first traffic identification in the first static object group is successfully matched with each second traffic identification in the second static object group, acquiring attribute information of each target traffic identification from the attribute information of each traffic identification pre-stored in a map, and acquiring color information of each target traffic identification according to the current image;
the second point cloud determining module is configured to determine, based on the world coordinates of the second traffic identifiers, the current pose information of the vehicle, and the preset parameter, corresponding coordinate points of the two-dimensional image plane after the second traffic identifiers in the second static object group are projected onto the two-dimensional image plane by using the following formula:
$$ d_s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = T_{vc} \, T_{wv} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} $$

wherein $d_s$ is the scale; x, y and z are respectively the abscissa, the ordinate and the vertical coordinate of the second traffic sign in world coordinates; u and v are the coordinates of the point on the two-dimensional image plane corresponding to the second traffic sign after it is projected onto the two-dimensional image plane; $T_{wv}$ is the current pose information of the vehicle; and $T_{vc}$ is the preset parameter, namely the calibration parameters from the origin of the current vehicle positioning coordinate system to the camera;
determining combinations to be matched according to the k types of second traffic identifications in the second static object group to obtain f combinations to be matched; wherein,

$$ f = \prod_{i=1}^{k} C_{Q_i}^{P_i} $$

the first static object group comprises k types of first traffic signs, and the numbers of the k types of first traffic signs are respectively $\{P_1, P_2, \ldots, P_k\}$; the second static object group comprises k types of second traffic signs, and the numbers of the k types of second traffic signs are respectively $\{Q_1, Q_2, \ldots, Q_k\}$; the $P_i$ first traffic signs and the $Q_i$ second traffic signs are traffic signs of the same type, and $P_i \le Q_i$ for $i = 1, 2, \ldots, k$;

for each combination to be matched, taking the coordinate points, on the two-dimensional image plane, corresponding to the second traffic identifications in the combination to be matched together as the image plane point cloud of the combination to be matched; and determining the image plane point cloud of each combination to be matched as an image plane point cloud of the map corresponding to the second static object group.
7. The apparatus of claim 6, wherein the attribute information obtaining module is further configured to obtain the attribute information and the color information of each first traffic identifier according to the current image if each first traffic identifier in the first static object group is unsuccessfully matched with each second traffic identifier in the second static object group.
8. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 5 when executing a program stored in the memory.
9. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of the claims 1-5.
CN202110445372.5A 2021-04-25 2021-04-25 Traffic identification detection method and device, electronic equipment and storage medium Active CN112861832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110445372.5A CN112861832B (en) 2021-04-25 2021-04-25 Traffic identification detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110445372.5A CN112861832B (en) 2021-04-25 2021-04-25 Traffic identification detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112861832A CN112861832A (en) 2021-05-28
CN112861832B true CN112861832B (en) 2021-07-20

Family

ID=75992828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110445372.5A Active CN112861832B (en) 2021-04-25 2021-04-25 Traffic identification detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112861832B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546755A (en) * 2021-06-30 2022-12-30 上海商汤临港智能科技有限公司 Traffic identification recognition method and device, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111121805A (en) * 2019-12-11 2020-05-08 广州赛特智能科技有限公司 Local positioning correction method, device and medium based on road traffic marking marks
CN111597986B (en) * 2020-05-15 2023-09-29 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for generating information
CN111882612B (en) * 2020-07-21 2024-03-08 武汉理工大学 Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
CN112180923A (en) * 2020-09-23 2021-01-05 深圳裹动智驾科技有限公司 Automatic driving method, intelligent control equipment and automatic driving vehicle
CN112200868A (en) * 2020-09-30 2021-01-08 深兰人工智能(深圳)有限公司 Positioning method and device and vehicle
CN112419505B (en) * 2020-12-07 2023-11-10 苏州工业园区测绘地理信息有限公司 Automatic extraction method for vehicle-mounted point cloud road shaft by combining semantic rules and model matching
CN112580489A (en) * 2020-12-15 2021-03-30 深兰人工智能(深圳)有限公司 Traffic light detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112861832A (en) 2021-05-28

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220321

Address after: 430051 No. b1336, chuanggu startup area, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Hubei Province

Patentee after: Yikatong (Hubei) Technology Co.,Ltd.

Address before: 430056 building B (qdxx-f7b), No.7 building, qiedixiexin science and Technology Innovation Park, South taizihu innovation Valley, Wuhan Economic and Technological Development Zone, Hubei Province

Patentee before: HUBEI ECARX TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right