CN114970790A - Traffic sign, manufacturing method thereof and vehicle pose estimation method - Google Patents

Traffic sign, manufacturing method thereof and vehicle pose estimation method Download PDF

Info

Publication number
CN114970790A
Authority
CN
China
Prior art keywords
traffic
dimensional code
coordinate system
dimensional
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210550800.5A
Other languages
Chinese (zh)
Inventor
楼喜中
杨超
秦成孝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Weiying Technology Co ltd
China Jiliang University
Original Assignee
Hangzhou Weiying Technology Co ltd
China Jiliang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Weiying Technology Co ltd, China Jiliang University filed Critical Hangzhou Weiying Technology Co ltd

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06009Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
    • G06K19/06037Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking multi-dimensional coding
    • EFIXED CONSTRUCTIONS
    • E01CONSTRUCTION OF ROADS, RAILWAYS, OR BRIDGES
    • E01FADDITIONAL WORK, SUCH AS EQUIPPING ROADS OR THE CONSTRUCTION OF PLATFORMS, HELICOPTER LANDING STAGES, SIGNS, SNOW FENCES, OR THE LIKE
    • E01F9/00Arrangement of road signs or traffic signals; Arrangements for enforcing caution
    • E01F9/60Upright bodies, e.g. marker posts or bollards; Supports for road signs
    • E01F9/604Upright bodies, e.g. marker posts or bollards; Supports for road signs specially adapted for particular signalling purposes, e.g. for indicating curves, road works or pedestrian crossings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06009Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
    • G06K19/06046Constructional details
    • G06K19/06103Constructional details the marking being embedded in a human recognizable image, e.g. a company logo with an embedded two-dimensional code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06009Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
    • G06K19/06046Constructional details
    • G06K19/06131Constructional details the marking comprising a target pattern, e.g. for indicating the center of the bar code or for helping a bar code reader to properly orient the scanner or to retrieve the bar code inside of an image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10712Fixed beam scanning
    • G06K7/10722Photodetector array or CCD scanning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1408Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/14172D bar codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Civil Engineering (AREA)
  • Structural Engineering (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a traffic sign, a method for manufacturing it, and a vehicle pose estimation method. The manufacturing method comprises the following steps: performing BGR channel decomposition on a traffic sign picture to be processed to obtain the gray value b of its B channel; eliminating the B channel of the picture to obtain a preprocessed traffic sign picture; generating, with a QR Code open-source library, a two-dimensional code whose gray value is b and whose size is the same as that of the preprocessed picture, the encoded information of the two-dimensional code being either the position information corresponding to the traffic sign picture or a UID that is unique within a database table, where the UID of the two-dimensional code and its corresponding position information are prestored in the database; fusing the generated two-dimensional code with the preprocessed picture to obtain a traffic sign picture carrying the two-dimensional code; and engraving this picture on a reflective film to obtain the traffic sign. The traffic sign can be used for assisted positioning of vehicles as well as for indoor and outdoor vehicle pose estimation.

Description

Traffic sign, manufacturing method thereof and vehicle pose estimation method
Technical Field
The invention relates to the technical field of automatic driving, in particular to a traffic sign, a manufacturing method thereof and a vehicle pose estimation method.
Background
As an important leading direction of future automobile development, automatic driving integrates modern sensing, communication, automatic control, and artificial intelligence technologies; it can effectively reduce traffic accidents caused by driver negligence and has great potential for relieving traffic congestion, improving traffic efficiency, and reducing energy consumption.
At present, traffic sign recognition relies mainly on deep-learning object detection, and conventional detection algorithms show several drawbacks in real-world tests: they are easily limited by lighting, viewing angle, occlusion by obstacles, and driving speed; multi-target detection is difficult; detections are easily missed; and recognition is slow. Furthermore, training such an algorithm requires a large amount of traffic sign data and consumes considerable time and resources.
Current GPS positioning is also insensitive to height; in complex scenes such as overpasses and viaducts, it often deviates, causing positioning errors.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. To this end, the invention provides a method for manufacturing a traffic sign, comprising the following steps:
step 1, performing BGR channel decomposition on the traffic sign picture to be processed, and obtaining the gray value b of its B channel;
step 2, eliminating the B channel of the traffic sign picture to be processed to obtain a preprocessed traffic sign picture;
step 3, generating, with a QR Code open-source library, a two-dimensional code whose gray value is b and whose size is the same as that of the preprocessed traffic sign picture, wherein the encoded information of the two-dimensional code is either the position information corresponding to the traffic sign picture or a UID that is unique within a database table; the UID of the two-dimensional code and its corresponding position information are prestored in the database; the position information comprises one or more of road information, longitude, latitude, height above ground, and the size of the two-dimensional code, the size comprising the side length l of the two-dimensional code and the distance l₁ between the center points of two adjacent position detection patterns;
Step 4, fusing the two-dimensional code generated in the step 3 with the preprocessed traffic identification picture to obtain a traffic identification picture with the two-dimensional code;
step 5, engraving the traffic sign picture with the two-dimensional code on a reflective film to obtain the traffic sign.
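Steps 1–4 above can be sketched with NumPy array operations. This is a minimal illustration, not the patent's implementation: interpreting the gray value b as the mean intensity of the B channel is this sketch's assumption, and the binary code matrix is taken as an input (an actual implementation would generate it with a QR library and scale it to the picture size).

```python
import numpy as np

def embed_code_in_b_channel(sign_bgr: np.ndarray, code_mask: np.ndarray) -> np.ndarray:
    """Fuse a binary code pattern into the B channel of a BGR sign image.

    sign_bgr : HxWx3 uint8 image (OpenCV-style BGR channel order).
    code_mask: HxW binary array (1 = dark module of the two-dimensional code),
               already scaled to the preprocessed picture size.
    """
    h, w, _ = sign_bgr.shape
    assert code_mask.shape == (h, w), "code must match the preprocessed picture size"

    # Step 1: channel decomposition -- take the gray value b of the B channel
    # (here interpreted as its mean intensity).
    b = int(sign_bgr[:, :, 0].mean())

    # Step 2: eliminate the B channel to obtain the preprocessed picture.
    fused = sign_bgr.copy()
    fused[:, :, 0] = 0

    # Steps 3-4: write the code modules into the emptied B channel at gray value b.
    fused[:, :, 0][code_mask == 1] = b
    return fused
```

Because the code lives only in the B channel, the human-visible sign (dominated by the G and R channels) is left intact, which matches the patent's goal of not disturbing human recognition.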
The application also provides a traffic sign prepared by the method.
The application also provides a vehicle pose estimation method based on the traffic sign, which is characterized by comprising the following steps of:
step 1, after an automatic driving system senses a traffic sign, a vision camera acquires an image of the current traffic sign, and channel decomposition is carried out on the current traffic sign to acquire a two-dimensional code image of a channel B on the current traffic sign;
step 2, decoding the two-dimensional code image obtained in the step 1 to obtain position information corresponding to the two-dimensional code;
step 3, defining at least four feature points of the two-dimensional code, acquiring the pixel coordinates of these feature points in the two-dimensional code image, and obtaining the world coordinates of the four feature points in the world coordinate system according to the position information corresponding to the two-dimensional code;
step 4, according to the world coordinates and pixel coordinates of the feature points of the two-dimensional code image, calculating with the EPnP algorithm the pose transformation matrix from the world coordinate system to the camera coordinate system; this transformation matrix is the extrinsic matrix T_n of the vision camera at the n-th image frame, from which the estimated pose of the vehicle is obtained.
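The patent specifies EPnP (available, for example, as OpenCV's `cv2.solvePnP` with the `SOLVEPNP_EPNP` flag). As a dependency-free illustration of step 4 — recovering the camera pose from four coplanar sign feature points — here is a homography-based planar-pose sketch; it is a stand-in for EPnP, not the patent's algorithm, and all names are this sketch's own.

```python
import numpy as np

def planar_pose(world_xy, pixels, K):
    """Recover camera pose (R, t) from >=4 coplanar points (Z=0 in the world frame).

    world_xy: (N,2) world coordinates on the sign plane (e.g., QR feature points).
    pixels:   (N,2) pixel coordinates of the same points.
    K:        (3,3) camera intrinsic matrix.
    Returns R (3,3), t (3,) such that pixel ~ K (R X + t).
    """
    # DLT estimate of the homography H mapping plane coordinates to pixels.
    A = []
    for (X, Y), (u, v) in zip(world_xy, pixels):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)

    # H = s * K [r1 r2 t]: peel off the intrinsics, then fix scale and sign.
    B = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(B[:, 0])
    if B[2, 2] < 0:            # keep t_z > 0 so the sign lies in front of the camera
        s = -s
    r1, r2, t = s * B[:, 0], s * B[:, 1], s * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    # Project onto the closest true rotation matrix.
    U, _, Vt2 = np.linalg.svd(R)
    R = U @ np.diag([1, 1, np.linalg.det(U @ Vt2)]) @ Vt2
    return R, t
```

With noiseless correspondences this recovers the exact pose; EPnP additionally handles non-planar configurations and is more robust with many noisy points.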
Preferably, in step 2, the automatic driving system decodes the two-dimensional code to obtain the UID and uploads it to the traffic service center; the traffic service center accesses the database, queries the position information prestored under that UID, and sends it to the automatic driving system, which thus obtains the position information corresponding to the two-dimensional code.
The application further provides a vehicle pose estimation method based on the traffic sign, which comprises the following steps:
step 1, constructing the pose transformation relation between every pair of traffic two-dimensional codes, selecting one traffic two-dimensional code as the reference two-dimensional code, constructing the pose transformation relations between the reference two-dimensional code and all the other traffic two-dimensional codes, establishing a reference world coordinate system with the center of the reference two-dimensional code as the origin, defining at least four feature points of each two-dimensional code, and calculating the reference world coordinates of the feature points of all the traffic two-dimensional codes to construct a two-dimensional code map, wherein the map information of the two-dimensional code map comprises all the traffic two-dimensional codes and the reference world coordinates of their feature points;
step 2, after the automatic driving system senses the traffic identification, the vision camera collects the image frame of the current traffic identification, channel decomposition is carried out on the image frame of the current traffic identification to obtain a two-dimensional code image on a channel B, and the two-dimensional code image is detected to obtain a plurality of detected two-dimensional codes;
step 3, recognizing all the detected two-dimensional codes, acquiring pixel coordinates of the feature points of the successfully recognized detected two-dimensional codes from the image frames of the current traffic identification, and acquiring reference world coordinates of the feature points of the successfully recognized detected two-dimensional codes from a two-dimensional code map;
step 4, calculating a transformation matrix of the feature points of the successfully identified detection two-dimensional code, which is converted from a world coordinate system into a camera coordinate system, by using an EPnP algorithm according to the reference world coordinates and the pixel coordinates of the feature points of the successfully identified detection two-dimensional code, wherein the transformation matrix is an external parameter matrix of the visual camera under the image frame;
step 5, calculating the coordinates P^w_c of the vision camera in the reference world coordinate system. Since the optical center is the origin of the camera coordinate system, setting P^c_c = R · P^w_c + t to zero gives the coordinate calculation formula of the vision camera in the reference world coordinate system:

P^w_c = −R^(−1) · t = −R^T · t

wherein:
T is the transformation matrix converting the feature points in the image frame from the reference world coordinate system to the camera coordinate system, T = [R t];
R is the rotation matrix converting the feature points from the reference world coordinate system to the camera coordinate system;
t is the translation vector converting the feature points from the reference world coordinate system to the camera coordinate system;
P^c_c = (0, 0, 0)^T is the coordinate of the camera optical center in the camera coordinate system;
P^w_c is the coordinate of the camera optical center in the reference world coordinate system.
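This relation — given extrinsics P_c = R · P_w + t, the optical center sits at −Rᵀt in the reference world frame — can be checked numerically with a one-line sketch (the function name is this sketch's own):

```python
import numpy as np

def camera_center_in_world(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Optical center in the reference world frame, given the extrinsics
    P_c = R @ P_w + t estimated by PnP for the current image frame."""
    # Setting P_c = 0 (the optical center) and solving for P_w:
    return -R.T @ t   # R is orthogonal, so R^-1 == R^T
```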
Preferably, the method for constructing the pose transformation relations among all the traffic two-dimensional codes in step 1 comprises building a graph and optimizing it:
step 1, the vision camera collects key frames of all the traffic two-dimensional codes, a key frame being an image frame containing at least two traffic two-dimensional codes; the transformation matrices between the camera coordinate system and each traffic two-dimensional code in the key frame are obtained from the key frame, thereby yielding the transformation matrix between every pair of traffic two-dimensional codes in that key frame;
step 2, traversing all the key frames to obtain the set of transformation matrices between every pair of traffic two-dimensional codes, thereby obtaining the pose transformation relation between every pair of traffic two-dimensional codes.
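Given the 4×4 homogeneous poses of two codes observed in the same key frame (each estimated per-code by PnP, as above), the pairwise edge used to build this graph can be sketched as follows (names are this sketch's own):

```python
import numpy as np

def relative_code_transform(T_cam_from_i: np.ndarray, T_cam_from_k: np.ndarray) -> np.ndarray:
    """Transform from code i's frame to code k's frame.

    T_cam_from_i, T_cam_from_k: 4x4 homogeneous transforms mapping each code's
    own coordinate system into the same key frame's camera coordinate system.
    """
    # camera <- code i, composed with code k <- camera:
    return np.linalg.inv(T_cam_from_k) @ T_cam_from_i
```

Traversing all key frames and collecting these edges yields the pose graph whose chaining connects every code back to the reference two-dimensional code.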
Preferably, the method for calculating the reference world coordinates of the feature points of all the traffic two-dimensional codes in the reference world coordinate system according to the pose transformation relations between the reference two-dimensional code and all the other traffic two-dimensional codes comprises the following steps:
to pair
Figure BDA0003650612920000051
Performing linear optimization to obtain the optimum
Figure BDA0003650612920000052
And
Figure BDA0003650612920000053
according to the optimum
Figure BDA0003650612920000054
And
Figure BDA0003650612920000055
and establishing a reference world coordinate relationship between the characteristic points of the reference two-dimensional code and the characteristic points of the other traffic two-dimensional codes, and calculating the reference world coordinates of the characteristic points of the other traffic two-dimensional codes according to the reference world coordinates of the characteristic points of the reference two-dimensional code.
Wherein the content of the first and second substances,
T i a transformation matrix representing the ith two-dimensional code coordinate system to the reference world coordinate system;
T n a transformation matrix (external reference matrix of the camera of the nth frame) representing the reference world coordinate system to the camera coordinate system in the nth key frame;
k represents an internal reference matrix of the camera;
n represents a key frame number;
ζ=[r t] T e.g. SE (3), wherein SE (3) is a lie algebra corresponding to the special Euclidean group SE (3);
r=[r x ,r y ,r z ] T is a unit rotation vector, t is a translation vector;
Figure BDA0003650612920000061
c j j-th characteristic point P representing ith traffic two-dimensional code j Establishing three-dimensional homogeneous coordinates under a coordinate system by taking the center of the coordinate system as an origin;
Figure BDA0003650612920000062
and the two-dimensional pixel coordinates of the jth characteristic point of the ith traffic two-dimensional code in the nth key frame are represented.
Preferably, in the step 1, the traffic identification two-dimensional code map is uploaded to a cloud, and an automatic driving system of the vehicle can communicate with the cloud to download the traffic identification two-dimensional code map.
Road traffic sign boards are visible everywhere on urban roads. A two-dimensional code is fused into the sign board; the code encodes a unique UID whose associated information includes, but is not limited to, the information represented by the traffic sign and the accurate coordinates of its location. Unmanned vehicles can thus obtain traffic sign information in real time and accurate coordinates for assisted positioning, while human recognition of the sign remains unaffected;
through the traffic sign, the automatic driving system can sense and recognize the two-dimensional code on it, and thus estimate the vehicle pose from the pixel coordinates and world coordinates of the feature points on the two-dimensional code;
through the traffic signs, a two-dimensional code map can be built from the two-dimensional codes on all of them, and the automatic driving system can calculate the current camera pose from the sensed two-dimensional code image, thereby obtaining the pose of the driving vehicle and realizing indoor vehicle pose estimation.
Drawings
Fig. 1 is a schematic flow chart of a method for manufacturing a traffic sign according to embodiment 1;
fig. 2 is a schematic diagram of the traffic identification picture to be processed after BGR channel decomposition in embodiment 1;
fig. 3 is a schematic diagram of the embodiment 1 after eliminating the B channel of the traffic sign picture to be processed;
FIG. 4 is a schematic diagram of the two-dimensional code with the gray value b in embodiment 1;
fig. 5 is a schematic view of a traffic sign picture with two-dimensional codes in embodiment 1;
FIG. 6 is a schematic view of the traffic sign in embodiment 1;
FIG. 7 is a schematic diagram of the side length of the two-dimensional code and the distance between the center points of two adjacent position detection patterns of the two-dimensional code in embodiment 2;
fig. 8 is a schematic diagram of pixel coordinates of feature points of a two-dimensional code in embodiment 2;
fig. 9 is a schematic diagram illustrating a principle of solving a pose of a camera by using a two-dimensional code in embodiment 2;
FIG. 10 is a diagram showing key frames captured by the visual camera according to embodiment 3;
FIG. 11 is a schematic diagram of the reprojection error described in embodiment 3;
fig. 12 is a two-dimensional code distribution map and a camera keyframe track reconstructed in embodiment 3;
FIG. 13 is a schematic view of an image frame obtained after the autopilot system senses a traffic sign;
fig. 14 is a schematic diagram of displaying the real-time pose of the visual camera in the two-dimensional code map.
Detailed Description
Example 1
Referring to fig. 1, an embodiment 1 of the present application is provided, and the embodiment provides a method for manufacturing a traffic sign, including the following steps:
step 1, performing BGR channel decomposition on the traffic sign picture to be processed (FIG. 2 is a schematic diagram of the decomposed picture), and obtaining the gray value b of its B channel;
step 2, eliminating the B channel of the traffic sign picture to be processed (FIG. 3 is a schematic diagram after the B channel is eliminated) to obtain a preprocessed traffic sign picture;
step 3, generating with a QR Code open-source library a two-dimensional code whose gray value is b, as shown in FIG. 4, wherein the encoded information of the two-dimensional code is either the position information corresponding to the traffic sign picture or a UID unique within a database table; the UID and its corresponding position information are prestored in the database; the position information comprises one or more of road information, longitude, latitude, height, and the size of the two-dimensional code, the size comprising the side length l of the two-dimensional code and the distance l₁ between the center points of two adjacent position detection patterns;
step 4, as shown in FIG. 5, fusing the two-dimensional code generated in step 3 with the preprocessed traffic sign picture to obtain a traffic sign picture with the two-dimensional code;
step 5, engraving the traffic sign picture with the two-dimensional code obtained in step 4 on a reflective film to obtain the traffic sign shown in FIG. 6.
In step 3, after the QR Code open-source library generates the two-dimensional code with gray value b, the two-dimensional code is scaled according to the size of the preprocessed traffic sign picture, yielding a two-dimensional code whose gray value is b and whose size matches the preprocessed picture.
By fusing the two-dimensional code with a common traffic sign board on urban roads, the traffic sign two-dimensional code enables a vehicle's automatic driving system to obtain traffic sign information in real time and provides accurate coordinates for assisted positioning.
The traffic sign two-dimensional code has the same size as a common road traffic sign; typical sizes include 600 mm × 600 mm, 800 mm × 800 mm, 1000 mm × 1000 mm, 2400 mm × 1200 mm, 3000 mm × 2000 mm and 4000 mm × 2000 mm. After the traffic sign two-dimensional code is obtained through the above steps, it can be engraved directly on the reflective film with a precision engraving machine.
In step 3, the automatic driving system decodes the two-dimensional code to obtain the UID and uploads it to a traffic service center; the traffic service center accesses the database, queries the position information prestored under that UID, and sends it to the automatic driving system, which thus obtains the position information;
in step 3, the position information may also be encoded directly in the two-dimensional code. The encoded information in the two-dimensional code shown in FIG. 6 follows the format display-information_longitude_latitude_height, specifically: direct scholar road_left-turn highway_right-turn yellow road_120.372069568_30.310328745_5.5128.
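Assuming the underscore-delimited payload format above (display information followed by longitude, latitude, and height; the field names below are this sketch's own), decoding could look like:

```python
def parse_sign_payload(payload: str) -> dict:
    """Split an underscore-delimited sign payload of the form
    <display text fields...>_<longitude>_<latitude>_<height>.
    The trailing three fields are numeric; everything before them
    is treated as the human-readable display information."""
    parts = payload.split("_")
    lon, lat, height = (float(x) for x in parts[-3:])
    return {
        "display": "_".join(parts[:-3]),
        "longitude": lon,
        "latitude": lat,
        "height": height,
    }
```

Keeping the numeric fields last makes the parse unambiguous even when the display text itself contains underscores.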
Example 2
The embodiment provides a vehicle positioning method based on traffic identification on the basis of the embodiment 1, which comprises the following steps:
step 1, after an automatic driving system senses a traffic sign, a vision camera acquires an image of the current traffic sign, and channel decomposition is carried out on the current traffic sign to acquire a two-dimensional code image of a channel B on the current traffic sign;
step 2, decoding the two-dimensional code image obtained in step 1 to obtain the corresponding position information, which comprises road information, longitude, latitude, height above ground, and the two-dimensional code size; the size comprises the side length l of the two-dimensional code and the distance l₁ between the center points of two adjacent position detection patterns (see FIG. 7, a schematic diagram of the side length of the two-dimensional code and the distance between the center points of two adjacent position detection patterns);
step 3, referring to FIG. 8, defining the three position detection pattern center points and the fourth corner point P_04 of the two-dimensional code as the feature points, and acquiring the pixel coordinates of these four feature points in the two-dimensional code image; the pixel coordinates of the four feature points of the two-dimensional code are P_01(u_01, v_01), P_02(u_02, v_02), P_03(u_03, v_03) and P_04(u_04, v_04), and the corner points of the 3 position detection patterns are denoted P_ij = (u_ij, v_ij), i = 1, 2, 3, j = 1, 2, 3, 4; each position detection pattern center is the mean of its four corner points:

u_0i = (u_i1 + u_i2 + u_i3 + u_i4)/4,  v_0i = (v_i1 + v_i2 + v_i3 + v_i4)/4,  i = 1, 2, 3;
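Computing each position detection pattern center as the mean of its four corner points is a one-liner; a minimal sketch (array layout is this sketch's own):

```python
import numpy as np

def detection_pattern_centers(corners: np.ndarray) -> np.ndarray:
    """corners: (3, 4, 2) pixel coordinates P_ij of the four corners of each of
    the three position detection patterns; returns the (3, 2) centers P_0i."""
    return corners.mean(axis=1)
```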
step 4, calculating the world coordinates of the four feature points of the two-dimensional code in the world coordinate system according to the size, longitude, latitude, and height above ground contained in the position information corresponding to the two-dimensional code;
step 5, establishing point pairs of the pixel coordinates and world coordinates of the four feature points of the two-dimensional code, and calculating with the EPnP algorithm the pose transformation matrix between the camera coordinate system and the two-dimensional code coordinate system; this pose transformation matrix is the extrinsic matrix of the vision camera at the n-th image frame (see FIG. 9, a schematic diagram of the principle of solving the camera pose with the two-dimensional code).
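As a sketch of step 4 expressed in the two-dimensional code's own planar frame: placing the origin at the code center, with the three detection-pattern centers at top-left, top-right, and bottom-left and the fourth corner at bottom-right, is this sketch's assumption (the actual layout is given by FIGS. 7 and 8); these code-frame points would then be transformed to world coordinates using the sign's longitude, latitude, and height.

```python
def feature_points_code_frame(l: float, l1: float):
    """Planar (Z=0) coordinates, in meters, of the four feature points in a frame
    centered on the code: three position detection pattern centers spaced l1
    apart, plus the fourth (bottom-right) corner of a code of side length l."""
    return [
        (-l1 / 2.0,  l1 / 2.0, 0.0),   # top-left detection pattern center
        ( l1 / 2.0,  l1 / 2.0, 0.0),   # top-right detection pattern center
        (-l1 / 2.0, -l1 / 2.0, 0.0),   # bottom-left detection pattern center
        ( l  / 2.0, -l  / 2.0, 0.0),   # fourth corner P_04
    ]
```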
In step 2, the automatic driving system decodes the two-dimensional code to obtain the UID and uploads it to the traffic service center; the traffic service center accesses the database, queries the position information prestored under that UID, and sends it to the automatic driving system, which thus obtains the position information.
Example 3
In this embodiment, a vehicle pose estimation method is provided, including the following steps:
step 1, constructing the pose transformation relation between every pair of traffic two-dimensional codes, selecting one traffic two-dimensional code as the reference two-dimensional code, constructing the pose transformation relations between the reference two-dimensional code and all the other traffic two-dimensional codes, establishing a reference world coordinate system with the center of the reference two-dimensional code as the origin, defining at least four feature points of each traffic two-dimensional code, and calculating the reference world coordinates of the feature points of all the traffic two-dimensional codes to construct a two-dimensional code map, wherein the map information of the two-dimensional code map comprises all the traffic two-dimensional codes and the reference world coordinates of their feature points;
step 2, after the automatic driving system senses the traffic identification, the vision camera collects the image frame of the current traffic identification, channel decomposition is carried out on the image frame of the current traffic identification to obtain a two-dimensional code image on a channel B, and the two-dimensional code image is detected to obtain a plurality of detected two-dimensional codes;
step 3, recognizing all the detection two-dimensional codes, acquiring pixel coordinates of feature points of the successfully recognized detection two-dimensional codes from the image frames of the current traffic identification, and acquiring reference world coordinates of the feature points of the successfully recognized detection two-dimensional codes from a two-dimensional code map;
step 4, calculating, by an EPnP algorithm, according to the reference world coordinates and pixel coordinates of the feature points of the successfully identified detection two-dimensional codes, a transformation matrix that converts those feature points from the reference world coordinate system to the camera coordinate system, wherein the transformation matrix is the extrinsic matrix of the visual camera under the image frame, thereby obtaining the estimated pose of the vehicle.
Step 5, calculating the coordinates O_w of the visual camera in the reference world coordinate system. The coordinate calculation formula of the visual camera in the reference world coordinate system is as follows:

O_w = -R^T · t

wherein,
T is the transformation matrix that converts the feature points under the image frame from the reference world coordinate system to the camera coordinate system, and T = [R t];
R is the rotation matrix for converting the feature points from the reference world coordinate system to the camera coordinate system;
t is the translation vector for converting the feature points from the reference world coordinate system to the camera coordinate system;
O_c = (0, 0, 0)^T is the coordinate of the camera optical center in the camera coordinate system;
O_w is the coordinate of the camera optical center in the reference world coordinate system.
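The optical-center recovery of step 5 can be sketched as follows (pure Python; the example rotation and translation are made-up values, not patent data). Since the optical center is the origin of the camera frame, 0 = R·O_w + t, hence O_w = -R^T·t:

```python
# Recover the camera optical center in the reference world frame from the
# extrinsic matrix T = [R t]; R is orthogonal, so R^-1 = R^T.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def camera_centre_world(R, t):
    """O_w = -R^T t: optical center expressed in the reference world frame."""
    return [-x for x in matvec(transpose(R), t)]

# Example: camera rotated 90 degrees about z and translated (made-up numbers).
R = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]
t = [1.0, 2.0, 3.0]
print(camera_centre_world(R, t))  # -> [-2.0, 1.0, -3.0]
```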
The total number of traffic two-dimensional codes is recorded as M, and the codes are numbered sequentially i, i ∈ [1, M]. A schematic diagram of key frames captured by the visual camera is shown in fig. 10: the No. 1 and No. 2 two-dimensional codes are detected simultaneously in the first key frame, the No. 1, No. 2 and No. 3 two-dimensional codes are detected simultaneously in the second key frame, and the No. 3 and No. 4 two-dimensional codes are detected simultaneously in the third key frame.
In the second key frame f_2, the transformation matrix T_c,1 between the visual camera coordinate system and the No. 1 two-dimensional code coordinate system, the transformation matrix T_c,2 between the visual camera coordinate system and the No. 2 two-dimensional code coordinate system, and the transformation matrix T_c,3 between the visual camera coordinate system and the No. 3 two-dimensional code coordinate system can be calculated. From these, the transformation matrix between the No. 1 and No. 2 two-dimensional codes in f_2, T_1,2 = T_c,1^-1 · T_c,2, the transformation matrix between the No. 1 and No. 3 two-dimensional codes, T_1,3 = T_c,1^-1 · T_c,3, and the transformation matrix between the No. 2 and No. 3 two-dimensional codes, T_2,3 = T_c,2^-1 · T_c,3, can be obtained.
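The code-to-code transform from two camera observations in the same key frame can be sketched as below (illustrative Python, not the patent's implementation; the example poses are made-up pure translations). If T_c1 and T_c2 map the two codes' frames into the camera frame, the relative transform is inv(T_c1)·T_c2:

```python
# Compose the relative pose of code 2 in code 1's frame from the two
# camera-observed 4x4 homogeneous transforms, using the closed-form SE(3)
# inverse [R t]^-1 = [R^T, -R^T t].

def matmul4(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def se3_inverse(T):
    R = [row[:3] for row in T[:3]]
    t = [row[3] for row in T[:3]]
    Rt = [list(col) for col in zip(*R)]                    # R transpose
    mt = [-sum(Rt[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [Rt[0] + [mt[0]], Rt[1] + [mt[1]], Rt[2] + [mt[2]], [0, 0, 0, 1]]

def relative_transform(T_c1, T_c2):
    """T_12: pose of code 2 expressed in code 1's coordinate system."""
    return matmul4(se3_inverse(T_c1), T_c2)

# Both codes seen in one frame: code 1 at camera x=1, code 2 at camera x=3
# (no rotation), so code 2 sits at x=2 in code 1's frame.
T_c1 = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
T_c2 = [[1, 0, 0, 3], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(relative_transform(T_c1, T_c2))
```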
The calculation method between every two traffic two-dimensional codes observed in a key frame f_n is as follows:

ψ_n = { T_i,j = T_c,i^-1 · T_c,j | T_c,i, T_c,j ∈ ξ_n }

wherein ξ_n denotes the set of pose relationships between all traffic two-dimensional codes observed in f_n and the camera coordinate system, and ψ_n denotes the set of transformation matrices between every two of those traffic two-dimensional codes. In the same way, the set ψ_n of transformation matrices between all traffic two-dimensional codes observed in each f_n can be obtained.
After the center of a certain traffic two-dimensional code is designated as the origin of a reference world coordinate system, a two-dimensional code map under this single world coordinate system can be obtained.
Considering that the camera observation data contain errors, this embodiment optimizes the pose distribution of the two-dimensional code map by minimizing the reprojection error, and finally obtains the reference world coordinates of the feature points of all two-dimensional code labels in the reference coordinate system; fig. 11 is a schematic diagram of the reprojection-error principle.
In practice, factors such as poor illumination, excessively fast motion, insufficient resolution and lens distortion introduce noise into the observations, producing large accumulated errors; a nonlinear optimization method is therefore needed to reduce the error accumulation and avoid drift of the positioning result.
According to the reprojection principle of the pinhole model, a least-squares optimization problem is constructed to adjust the estimated values, and the error term to be minimized for the ith traffic two-dimensional code in the nth key frame is defined as follows:

e_n,i = Σ_j ‖ p_n,i,j - Ψ(K, T_n, T_i · c_j) ‖²

where p_n,i,j is the observed pixel coordinate of the jth feature point, defined below.
The variables in the formula are defined as follows; T_i, T_n and K are the quantities to be optimized.
K represents the intrinsic matrix of the camera;
T_n represents the transformation matrix from the reference world coordinate system to the visual camera coordinate system in the nth key frame (i.e., the extrinsic matrix of the visual camera in the nth key frame);
T_i represents the transformation matrix from the two-dimensional code coordinate system of the ith traffic two-dimensional code to the reference world coordinate system.
c_j is a known quantity representing the three-dimensional homogeneous coordinate of the jth feature point P_j of the ith traffic two-dimensional code in its own two-dimensional code coordinate system. In this embodiment, with the code center as the origin and l the side length of the code, the three-dimensional homogeneous coordinates of the four feature points in the two-dimensional code coordinate system are:

c_1 = [-l/2, l/2, 0, 1]^T, c_2 = [l/2, l/2, 0, 1]^T,
c_3 = [l/2, -l/2, 0, 1]^T, c_4 = [-l/2, -l/2, 0, 1]^T.
(T_i · c_j) denotes transforming c_j into the reference world coordinate system, obtaining the world coordinate P^w_j. Ψ(K, T_n, T_i · c_j) denotes projecting P^w_j through the camera extrinsics T_n and intrinsics K according to the camera pinhole model, yielding the reprojected pixel coordinates of the feature point of the traffic two-dimensional code.
p_n,i,j denotes the observed pixel coordinate of the jth feature point of the ith traffic two-dimensional code in the nth key frame; it is an observed quantity and contains observation noise. ‖ p_n,i,j - Ψ(K, T_n, T_i · c_j) ‖ represents the Euclidean distance between the reprojected coordinate and the observed pixel coordinate of a feature point of the ith traffic two-dimensional code in the nth key frame, i.e., the reprojection error.
The total reprojection error over all key frames can be obtained by the following formula:

E = Σ_{n=1}^{N} Σ_{i=1}^{M} e_n,i

wherein M is the total number of traffic two-dimensional codes and N is the total number of key frames. Considering that the transformation matrices T_n and T_i carry internal constraints, the transformation matrices are converted into Lie-algebra form in order to reduce the search space (the number of unknowns).
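A single term of this reprojection error can be computed as sketched below (illustrative Python; the intrinsics, poses and observed pixel are hypothetical example values, not from the patent). A code-frame corner is mapped through T_i and T_n, projected through the pinhole intrinsics K, and compared with the detected pixel:

```python
# One reprojection-error term e = || p_obs - Psi(K, T_n, T_i * c_j) || for a
# code-frame feature point; all numeric values below are made-up examples.
import math

def apply_T(T, p):
    """Apply a 4x4 rigid transform to a 3D point (homogeneous w = 1)."""
    ph = p + [1.0]
    return [sum(T[i][k] * ph[k] for k in range(4)) for i in range(3)]

def project(K, p_cam):
    """Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy."""
    x, y, z = p_cam
    return [K[0][0] * x / z + K[0][2], K[1][1] * y / z + K[1][2]]

def reprojection_error(K, T_n, T_i, c_j, p_obs):
    p_world = apply_T(T_i, c_j)    # code frame -> reference world frame
    p_cam = apply_T(T_n, p_world)  # world frame -> camera frame
    u, v = project(K, p_cam)
    return math.hypot(u - p_obs[0], v - p_obs[1])

K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
I4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
T_n = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 4.0], [0, 0, 0, 1]]  # 4 m away
c_j = [0.3, 0.3, 0.0]          # corner of a 0.6 m code, code frame
p_obs = [383.0, 300.0]         # hypothetical detected pixel
print(reprojection_error(K, T_n, I4, c_j, p_obs))  # -> 3.0
```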
Assume a point P_a = [x, y, z]^T in an arbitrary reference frame a in space. A rotation followed by a translation transforms the point into another coordinate system b. The transformation is defined as:

P_b = R · P_a + t,   ζ = [r t]^T

wherein r = [r_x, r_y, r_z]^T is a unit rotation vector, t is a translation vector, ζ ∈ se(3), and se(3) is the Lie algebra corresponding to the special Euclidean group. From the Rodrigues formula:

R = cos θ · I + (1 - cos θ) · r r^T + sin θ · r^∧

wherein r^∧ is the antisymmetric matrix of r, and θ is the rotation angle corresponding to r. The transformation matrix T is then as follows:

T = [ R  t ; 0^T  1 ]
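The Rodrigues formula quoted above can be sketched directly (pure Python, 3x3 lists; the test rotation is an arbitrary example):

```python
# R = cos(theta) I + (1 - cos(theta)) r r^T + sin(theta) r^, with r a unit
# rotation axis and r^ its antisymmetric (skew) matrix.
import math

def skew(r):
    rx, ry, rz = r
    return [[0.0, -rz, ry],
            [rz, 0.0, -rx],
            [-ry, rx, 0.0]]

def rodrigues(r, theta):
    c, s = math.cos(theta), math.sin(theta)
    rr = [[r[i] * r[j] for j in range(3)] for i in range(3)]   # r r^T
    sk = skew(r)
    I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    return [[c * I[i][j] + (1 - c) * rr[i][j] + s * sk[i][j]
             for j in range(3)] for i in range(3)]

# Rotating the x-axis by 90 degrees about z should give the y-axis.
R = rodrigues([0.0, 0.0, 1.0], math.pi / 2)
x_rotated = [sum(R[i][k] * [1.0, 0.0, 0.0][k] for k in range(3))
             for i in range(3)]
print(x_rotated)
```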
the following formula can be obtained from the formulae (3), (4), (5) and (6)
Figure BDA0003650612920000154
The non-linear optimization is carried out on the formula, and the optimal one can be obtained
Figure BDA0003650612920000155
This embodiment uses the Levenberg-Marquardt (LM) algorithm to solve the least-squares problem. LM improves on the Gauss-Newton method: it assumes the local approximation is valid only within a certain range and shrinks that range when the approximation is poor, which guarantees the positive definiteness of the incremental equation. To avoid iteration failure, the method needs initial values in advance: the initial value of the camera intrinsics K can be obtained by Zhang's calibration method, the initial value of T_n by the EPnP method, and the initial value of T_i from the established two-dimensional code map.
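The damping behavior described above can be illustrated on a toy one-parameter problem (Python sketch of the LM principle only; the patent applies LM to the full bundle of T_n, T_i and K, and the model y = exp(a·x) here is an arbitrary example):

```python
# Toy Levenberg-Marquardt: a damped Gauss-Newton step whose damping lambda
# grows when a step fails (trust region shrinks) and shrinks when it succeeds.
import math

def lm_fit(xs, ys, a0=0.0, lam=1e-2, iters=50):
    a = a0
    cost = sum((y - math.exp(a * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        # Residuals r = y - f(a); Jacobian J = df/da = x * exp(a x).
        JtJ = sum((x * math.exp(a * x)) ** 2 for x in xs)
        Jtr = sum(x * math.exp(a * x) * (y - math.exp(a * x))
                  for x, y in zip(xs, ys))
        step = Jtr / (JtJ + lam)          # damped (LM) normal equation
        new_cost = sum((y - math.exp((a + step) * x)) ** 2
                       for x, y in zip(xs, ys))
        if new_cost < cost:               # good step: accept, reduce damping
            a, cost, lam = a + step, new_cost, lam * 0.5
        else:                             # bad step: reject, damp more heavily
            lam *= 10.0
    return a

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]      # data generated with a = 0.5
print(lm_fit(xs, ys))
```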
With the optimized T_n* and T_i*, the coordinates of all traffic two-dimensional codes in the reference world coordinate system can be calculated; a two-dimensional code map can then be established from the coordinates of the feature points of each traffic two-dimensional code in the reference world coordinate system, and the camera key-frame trajectory and the distribution of the traffic two-dimensional codes are obtained at the same time. The result is shown in fig. 12.
The specific method for constructing the map comprises the following steps:
step 1, a visual camera collects key frames of all traffic two-dimensional codes, the key frames are image frames with at least two traffic two-dimensional codes, and transformation matrixes of a visual camera coordinate system and all traffic two-dimensional codes in the key frames are obtained from the key frames, so that transformation matrixes between all two traffic two-dimensional codes in the key frames are obtained;
and 2, traversing all the key frames to obtain a transformation matrix set between every two traffic two-dimensional codes in all the traffic two-dimensional codes, thereby obtaining a pose transformation relation between every two traffic two-dimensional codes in all the traffic two-dimensional codes.
Before the two-dimensional code map is constructed, the center of any one traffic two-dimensional code is designated as the origin of the reference world coordinate system. The pose conversion relations of the remaining traffic two-dimensional codes relative to the reference two-dimensional code coordinate system are obtained from key frames captured by the visual camera, a key frame being an image frame in which at least two traffic two-dimensional codes are detected and identified. The reference world coordinates of all traffic two-dimensional code feature points in the reference world coordinate system are then calculated from the pose conversion relations between the reference two-dimensional code coordinate system and the remaining traffic two-dimensional codes. The final result is a traffic two-dimensional code map under a single reference world coordinate system, which contains the pose conversion relations between the reference two-dimensional code coordinate system and the remaining traffic two-dimensional codes, together with the reference world coordinates of all traffic two-dimensional code feature points.
When constructing the map, auxiliary traffic signs can be designed and added so that the visual camera can conveniently acquire key frames; after the two-dimensional code map is constructed, the auxiliary traffic signs are removed and only the actually required traffic signs are retained.
After the two-dimension code map is constructed, the two-dimension code map can be used for assisting in large-scale positioning, and the map comprises a transformation matrix of each two-dimension code coordinate system and a reference world coordinate system and reference world coordinates of four feature points of each traffic two-dimension code in the reference world coordinate system. When the vision camera captures any one traffic two-dimensional code or a plurality of traffic two-dimensional codes, the pose of the vision camera under the reference world coordinate system can be solved.
Fig. 13 is a schematic diagram of an image frame obtained after the automatic driving system senses a traffic sign. When the No. 9 two-dimensional code is designated as the reference two-dimensional code, the reference world coordinate system is established with the center of the No. 9 two-dimensional code as the origin. After construction of the two-dimensional code map is completed, the pose relationships T_8,9, T_10,9 and T_11,9 of the No. 8, No. 10 and No. 11 two-dimensional codes with respect to the No. 9 two-dimensional code can be obtained, and at the same time the reference world coordinates P^w_i,j of all feature points of all traffic two-dimensional codes in the map are obtained, wherein i denotes the ith traffic two-dimensional code and j denotes the jth feature point of the ith traffic two-dimensional code.
When the visual camera detects the No. 9 and No. 10 two-dimensional codes, it obtains the two-dimensional code image of the current traffic sign, obtains from that image the 8 pixel coordinates of the four feature points of the No. 9 two-dimensional code and the four feature points of the No. 10 two-dimensional code, and obtains from the two-dimensional code map the 8 reference world coordinates of those same feature points, i.e., eight point pairs of pixel coordinates and reference world coordinates. The extrinsic matrix T_n of the visual camera under the current key frame is then obtained by the EPnP and LM methods.
The camera pose is T_n = [R t]. The coordinates O_w of the visual camera in the three-dimensional world coordinate system are calculated by the following formula:

O_w = -R^T · t
the effect of positioning by using the constructed two-dimensional code map is shown in fig. 14, the real-time pose display of the visual camera in the two-dimensional code map is shown, wherein the small box is the pose of the traffic two-dimensional code, and the large box is the pose of the camera in the current image frame.
The result shows that the current pose of the camera can be obtained in real time by using the map established by the two-dimensional code, so that the pose of the driving vehicle is obtained, and the indoor positioning function is realized.

Claims (8)

1. The method for manufacturing the traffic sign is characterized by comprising the following steps of:
step 1, carrying out BGR channel decomposition on a traffic identification picture to be processed, and acquiring a gray value B of a channel B of the traffic identification picture to be processed;
step 2, eliminating a channel B of the traffic identification picture to be processed to obtain a preprocessed traffic identification picture;
step 3, generating, by using a QR Code open-source library, a two-dimensional code with a gray value of b and the same size as the preprocessed traffic identification picture, wherein the coding information of the two-dimensional code is the position information corresponding to the traffic identification picture or a UID unique in a database table, the UID of the two-dimensional code and the position information corresponding to the UID being prestored in the database; the position information comprises one or more of road information, longitude, latitude, height above the ground and the size of the two-dimensional code, and the size of the two-dimensional code comprises the side length l of the two-dimensional code and the distance l1 between the center points of two adjacent position detection patterns;
Step 4, fusing the two-dimensional code generated in the step 3 with the preprocessed traffic identification picture to obtain a traffic identification picture with the two-dimensional code;
and 5, engraving the traffic sign picture with the two-dimensional code on the reflective film to obtain the traffic sign.
2. A traffic sign, characterized by being manufactured by the manufacturing method as claimed in claim 1.
3. The vehicle pose estimation method based on the traffic sign of claim 2, characterized by comprising the following steps:
step 1, after an automatic driving system senses a traffic sign, a vision camera acquires an image of the current traffic sign, and channel decomposition is carried out on the current traffic sign to acquire a two-dimensional code image of a channel B on the current traffic sign;
step 2, decoding the two-dimensional code image obtained in the step 1 to obtain position information corresponding to the two-dimensional code;
step 3, defining at least four characteristic points of the two-dimensional code, acquiring pixel coordinates of a plurality of characteristic points of the two-dimensional code in a two-dimensional code image, and acquiring world coordinates of the four characteristic points of the two-dimensional code in a world coordinate system according to position information corresponding to the two-dimensional code;
step 4, calculating, by an EPnP algorithm, according to the world coordinates and pixel coordinates of the feature points of the two-dimensional code image, a pose transformation matrix from the world coordinate system to the camera coordinate system, wherein the transformation matrix is the extrinsic matrix T_n of the visual camera under the nth image frame, and obtaining the estimated pose of the vehicle.
4. The method according to claim 3, wherein in step 2, the automatic driving system decodes the two-dimensional code to obtain the UID and uploads it to the traffic service center; the traffic service center accesses the database, queries the location information prestored under the UID and sends it to the automatic driving system, whereby the automatic driving system obtains the location information corresponding to the two-dimensional code.
5. A vehicle pose estimation method based on the traffic sign of claim 2, characterized by comprising the following steps:
step 1, constructing a pose conversion relation between every two traffic two-dimensional codes in all the traffic two-dimensional codes, selecting one traffic two-dimensional code as a reference two-dimensional code, constructing pose conversion relations between the reference two-dimensional code and the other traffic two-dimensional codes, establishing a reference world coordinate system by taking the center of the reference two-dimensional code as an original point, defining at least four feature points of the two-dimensional code, calculating reference world coordinates of the feature points of all the traffic two-dimensional codes, and constructing a two-dimensional code map, wherein map information of the two-dimensional code map comprises all the traffic two-dimensional codes and the reference world coordinates of the feature points thereof;
step 2, after the automatic driving system senses the traffic identification, the vision camera collects the image frame of the current traffic identification, channel decomposition is carried out on the image frame of the current traffic identification to obtain a two-dimensional code image on a channel B, and the two-dimensional code image is detected to obtain a plurality of detected two-dimensional codes;
step 3, recognizing all the detected two-dimensional codes, acquiring pixel coordinates of the feature points of the successfully recognized detected two-dimensional codes from the image frames of the current traffic identification, and acquiring reference world coordinates of the feature points of the successfully recognized detected two-dimensional codes from a two-dimensional code map;
step 4, calculating a transformation matrix of the feature points of the successfully identified detection two-dimensional code, which is converted from a world coordinate system into a camera coordinate system, by using an EPnP algorithm according to the reference world coordinates and the pixel coordinates of the feature points of the successfully identified detection two-dimensional code, wherein the transformation matrix is an external parameter matrix of the visual camera under the image frame;
step 5, calculating the coordinates O_w of the visual camera in the reference world coordinate system, the coordinate calculation formula of the visual camera in the reference world coordinate system being:

O_w = -R^T · t

wherein,
T is the transformation matrix that converts the feature points in the image frame from the reference world coordinate system to the camera coordinate system, and T = [R t];
R is the rotation matrix for converting the feature points from the reference world coordinate system to the camera coordinate system;
t is the translation vector for converting the feature points from the reference world coordinate system to the camera coordinate system;
O_c = (0, 0, 0)^T is the coordinate of the camera optical center in the camera coordinate system;
O_w is the coordinate of the camera optical center in the reference world coordinate system.
6. The vehicle pose estimation method based on traffic signs according to claim 5, wherein the method for constructing the pose transformation relationship among all traffic two-dimensional codes in the step 1 comprises:
establishing a graph and optimizing:
step 1, a visual camera collects key frames of all traffic two-dimensional codes, the key frames are image frames with at least two traffic two-dimensional codes, and transformation matrixes of a visual camera coordinate system and all traffic two-dimensional codes in the key frames are obtained from the key frames, so that transformation matrixes between all two traffic two-dimensional codes in the key frames are obtained;
and 2, traversing all the key frames to obtain a transformation matrix set between every two traffic two-dimensional codes in all the traffic two-dimensional codes, thereby obtaining a pose transformation relation between every two traffic two-dimensional codes in all the traffic two-dimensional codes.
7. The vehicle pose estimation method based on the traffic sign according to claim 6, wherein the method for calculating the reference world coordinates of the feature points of all the traffic two-dimensional codes in the reference world coordinate system according to the pose transformation relations between the reference two-dimensional codes and all the other traffic two-dimensional codes comprises the following steps:
performing nonlinear optimization on the total reprojection error E(T_i, T_n, K) to obtain the optimal T_i* and T_n*; and establishing, according to the optimal T_i* and T_n*, the reference world coordinate relationship between the feature points of the reference two-dimensional code and the feature points of the remaining traffic two-dimensional codes, and calculating the reference world coordinates of the feature points of the remaining traffic two-dimensional codes according to the reference world coordinates of the feature points of the reference two-dimensional code.
wherein,
T_i represents the transformation matrix from the ith two-dimensional code coordinate system to the reference world coordinate system;
T_n represents the transformation matrix from the reference world coordinate system to the camera coordinate system in the nth key frame (the extrinsic matrix of the camera in the nth frame);
K represents the intrinsic matrix of the camera;
n represents the key frame number;
ζ = [r t]^T ∈ se(3), wherein se(3) is the Lie algebra corresponding to the special Euclidean group SE(3);
r = [r_x, r_y, r_z]^T is a unit rotation vector and t is a translation vector;
c_j represents the three-dimensional homogeneous coordinate of the jth feature point P_j of the ith traffic two-dimensional code in the coordinate system established with the center of the code as the origin;
p_n,i,j represents the two-dimensional pixel coordinate of the jth feature point of the ith traffic two-dimensional code in the nth key frame.
8. The vehicle pose estimation method based on traffic signs according to claim 5, wherein in the step 1, the traffic sign two-dimensional code map is uploaded to a cloud, and an automatic driving system of the vehicle can communicate with the cloud to download the traffic sign two-dimensional code map.
CN202210550800.5A 2022-05-18 2022-05-18 Traffic sign, manufacturing method thereof and vehicle pose estimation method Pending CN114970790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210550800.5A CN114970790A (en) 2022-05-18 2022-05-18 Traffic sign, manufacturing method thereof and vehicle pose estimation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210550800.5A CN114970790A (en) 2022-05-18 2022-05-18 Traffic sign, manufacturing method thereof and vehicle pose estimation method

Publications (1)

Publication Number Publication Date
CN114970790A true CN114970790A (en) 2022-08-30

Family

ID=82986226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210550800.5A Pending CN114970790A (en) 2022-05-18 2022-05-18 Traffic sign, manufacturing method thereof and vehicle pose estimation method

Country Status (1)

Country Link
CN (1) CN114970790A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115936029A (en) * 2022-12-13 2023-04-07 湖南大学无锡智能控制研究院 SLAM positioning method and device based on two-dimensional code
CN115936029B (en) * 2022-12-13 2024-02-09 湖南大学无锡智能控制研究院 SLAM positioning method and device based on two-dimensional code
CN117315018A (en) * 2023-08-31 2023-12-29 上海理工大学 User plane pose detection method, equipment and medium based on improved PnP
CN117315018B (en) * 2023-08-31 2024-04-26 上海理工大学 User plane pose detection method, equipment and medium based on improved PnP


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination